00:00:00.000 Started by upstream project "autotest-per-patch" build number 132582 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.005 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.837 The recommended git tool is: git 00:00:00.838 using credential 00000000-0000-0000-0000-000000000002 00:00:00.840 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.850 Fetching changes from the remote Git repository 00:00:00.853 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.863 Using shallow fetch with depth 1 00:00:00.863 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.863 > git --version # timeout=10 00:00:00.871 > git --version # 'git version 2.39.2' 00:00:00.872 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.883 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.883 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.992 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.002 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.014 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.014 > git config core.sparsecheckout # timeout=10 00:00:07.025 > git read-tree -mu HEAD # timeout=10 00:00:07.041 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.062 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.062 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.169 [Pipeline] Start of Pipeline 00:00:07.180 [Pipeline] library 00:00:07.182 Loading library shm_lib@master 00:00:07.182 Library shm_lib@master is cached. Copying from home. 00:00:07.196 [Pipeline] node 00:00:07.205 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:07.207 [Pipeline] { 00:00:07.218 [Pipeline] catchError 00:00:07.219 [Pipeline] { 00:00:07.234 [Pipeline] wrap 00:00:07.242 [Pipeline] { 00:00:07.251 [Pipeline] stage 00:00:07.253 [Pipeline] { (Prologue) 00:00:07.275 [Pipeline] echo 00:00:07.276 Node: VM-host-WFP7 00:00:07.284 [Pipeline] cleanWs 00:00:07.295 [WS-CLEANUP] Deleting project workspace... 00:00:07.295 [WS-CLEANUP] Deferred wipeout is used... 00:00:07.302 [WS-CLEANUP] done 00:00:07.497 [Pipeline] setCustomBuildProperty 00:00:07.566 [Pipeline] httpRequest 00:00:08.432 [Pipeline] echo 00:00:08.433 Sorcerer 10.211.164.101 is alive 00:00:08.439 [Pipeline] retry 00:00:08.441 [Pipeline] { 00:00:08.449 [Pipeline] httpRequest 00:00:08.453 HttpMethod: GET 00:00:08.454 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.454 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.474 Response Code: HTTP/1.1 200 OK 00:00:08.475 Success: Status code 200 is in the accepted range: 200,404 00:00:08.475 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.522 [Pipeline] } 00:00:24.536 [Pipeline] // retry 00:00:24.543 [Pipeline] sh 00:00:24.826 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.842 [Pipeline] httpRequest 00:00:27.862 [Pipeline] echo 00:00:27.864 Sorcerer 10.211.164.101 is dead 00:00:27.874 [Pipeline] httpRequest 00:00:28.269 [Pipeline] echo 00:00:28.271 Sorcerer 10.211.164.101 is alive 00:00:28.280 [Pipeline] retry 00:00:28.282 
[Pipeline] { 00:00:28.296 [Pipeline] httpRequest 00:00:28.300 HttpMethod: GET 00:00:28.301 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:28.301 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:00:28.321 Response Code: HTTP/1.1 200 OK 00:00:28.322 Success: Status code 200 is in the accepted range: 200,404 00:00:28.322 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:04:26.283 [Pipeline] } 00:04:26.301 [Pipeline] // retry 00:04:26.310 [Pipeline] sh 00:04:26.591 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz 00:04:29.134 [Pipeline] sh 00:04:29.413 + git -C spdk log --oneline -n5 00:04:29.413 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:04:29.413 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:04:29.413 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:04:29.413 2e10c84c8 nvmf: Expose DIF type of namespace to host again 00:04:29.413 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:04:29.431 [Pipeline] writeFile 00:04:29.446 [Pipeline] sh 00:04:29.727 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:29.739 [Pipeline] sh 00:04:30.056 + cat autorun-spdk.conf 00:04:30.056 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:30.056 SPDK_RUN_ASAN=1 00:04:30.056 SPDK_RUN_UBSAN=1 00:04:30.056 SPDK_TEST_RAID=1 00:04:30.056 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:30.063 RUN_NIGHTLY=0 00:04:30.065 [Pipeline] } 00:04:30.082 [Pipeline] // stage 00:04:30.094 [Pipeline] stage 00:04:30.096 [Pipeline] { (Run VM) 00:04:30.108 [Pipeline] sh 00:04:30.388 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:30.388 + echo 'Start stage prepare_nvme.sh' 00:04:30.388 Start stage prepare_nvme.sh 00:04:30.388 + [[ -n 5 ]] 
00:04:30.388 + disk_prefix=ex5 00:04:30.388 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:04:30.388 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:04:30.388 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:04:30.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:30.388 ++ SPDK_RUN_ASAN=1 00:04:30.388 ++ SPDK_RUN_UBSAN=1 00:04:30.388 ++ SPDK_TEST_RAID=1 00:04:30.388 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:30.388 ++ RUN_NIGHTLY=0 00:04:30.388 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:04:30.388 + nvme_files=() 00:04:30.388 + declare -A nvme_files 00:04:30.388 + backend_dir=/var/lib/libvirt/images/backends 00:04:30.388 + nvme_files['nvme.img']=5G 00:04:30.388 + nvme_files['nvme-cmb.img']=5G 00:04:30.388 + nvme_files['nvme-multi0.img']=4G 00:04:30.388 + nvme_files['nvme-multi1.img']=4G 00:04:30.388 + nvme_files['nvme-multi2.img']=4G 00:04:30.388 + nvme_files['nvme-openstack.img']=8G 00:04:30.388 + nvme_files['nvme-zns.img']=5G 00:04:30.388 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:30.388 + (( SPDK_TEST_FTL == 1 )) 00:04:30.388 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:30.388 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:04:30.388 + for nvme in "${!nvme_files[@]}" 00:04:30.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:04:30.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:30.388 + for nvme in "${!nvme_files[@]}" 00:04:30.388 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:04:30.388 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:30.388 + for nvme in "${!nvme_files[@]}" 00:04:30.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:04:30.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:30.389 + for nvme in "${!nvme_files[@]}" 00:04:30.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:04:30.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:30.389 + for nvme in "${!nvme_files[@]}" 00:04:30.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:04:30.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:30.389 + for nvme in "${!nvme_files[@]}" 00:04:30.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:04:30.389 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:30.389 + for nvme in "${!nvme_files[@]}" 00:04:30.389 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:04:30.646 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:30.646 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:04:30.646 + echo 'End stage prepare_nvme.sh' 00:04:30.646 End stage prepare_nvme.sh 00:04:30.657 [Pipeline] sh 00:04:30.933 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:30.933 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:04:30.933 00:04:30.933 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:04:30.933 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:04:30.933 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:04:30.933 HELP=0 00:04:30.933 DRY_RUN=0 00:04:30.933 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:04:30.933 NVME_DISKS_TYPE=nvme,nvme, 00:04:30.933 NVME_AUTO_CREATE=0 00:04:30.933 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:04:30.933 NVME_CMB=,, 00:04:30.933 NVME_PMR=,, 00:04:30.933 NVME_ZNS=,, 00:04:30.933 NVME_MS=,, 00:04:30.933 NVME_FDP=,, 00:04:30.933 SPDK_VAGRANT_DISTRO=fedora39 00:04:30.933 SPDK_VAGRANT_VMCPU=10 00:04:30.933 SPDK_VAGRANT_VMRAM=12288 00:04:30.933 SPDK_VAGRANT_PROVIDER=libvirt 00:04:30.933 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:30.933 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:30.933 SPDK_OPENSTACK_NETWORK=0 00:04:30.933 VAGRANT_PACKAGE_BOX=0 00:04:30.933 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:04:30.933 FORCE_DISTRO=true 00:04:30.933 VAGRANT_BOX_VERSION= 00:04:30.933 EXTRA_VAGRANTFILES= 00:04:30.933 NIC_MODEL=virtio 00:04:30.933 00:04:30.933 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:04:30.933 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:04:33.469 Bringing machine 'default' up with 'libvirt' provider... 00:04:33.729 ==> default: Creating image (snapshot of base box volume). 00:04:33.989 ==> default: Creating domain with the following settings... 00:04:33.989 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732716244_73889dc4af05c0847470 00:04:33.989 ==> default: -- Domain type: kvm 00:04:33.989 ==> default: -- Cpus: 10 00:04:33.989 ==> default: -- Feature: acpi 00:04:33.989 ==> default: -- Feature: apic 00:04:33.989 ==> default: -- Feature: pae 00:04:33.989 ==> default: -- Memory: 12288M 00:04:33.989 ==> default: -- Memory Backing: hugepages: 00:04:33.989 ==> default: -- Management MAC: 00:04:33.989 ==> default: -- Loader: 00:04:33.989 ==> default: -- Nvram: 00:04:33.989 ==> default: -- Base box: spdk/fedora39 00:04:33.989 ==> default: -- Storage pool: default 00:04:33.989 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732716244_73889dc4af05c0847470.img (20G) 00:04:33.989 ==> default: -- Volume Cache: default 00:04:33.989 ==> default: -- Kernel: 00:04:33.989 ==> default: -- Initrd: 00:04:33.989 ==> default: -- Graphics Type: vnc 00:04:33.989 ==> default: -- Graphics Port: -1 00:04:33.989 ==> default: -- Graphics IP: 127.0.0.1 00:04:33.989 ==> default: -- Graphics Password: Not defined 00:04:33.989 ==> default: -- Video Type: cirrus 00:04:33.989 ==> default: -- Video VRAM: 9216 00:04:33.989 ==> default: -- Sound Type: 00:04:33.989 ==> default: -- Keymap: en-us 00:04:33.989 ==> default: -- TPM Path: 00:04:33.989 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:33.989 ==> default: -- Command line 
args: 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:33.989 ==> default: -> value=-drive, 00:04:33.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:33.989 ==> default: -> value=-drive, 00:04:33.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:33.989 ==> default: -> value=-drive, 00:04:33.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:33.989 ==> default: -> value=-drive, 00:04:33.989 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:33.989 ==> default: -> value=-device, 00:04:33.989 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:33.989 ==> default: Creating shared folders metadata... 00:04:34.249 ==> default: Starting domain. 00:04:36.155 ==> default: Waiting for domain to get an IP address... 00:04:54.271 ==> default: Waiting for SSH to become available... 00:04:54.271 ==> default: Configuring and enabling network interfaces... 
00:04:59.547 default: SSH address: 192.168.121.234:22 00:04:59.547 default: SSH username: vagrant 00:04:59.547 default: SSH auth method: private key 00:05:01.491 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:09.650 ==> default: Mounting SSHFS shared folder... 00:05:12.190 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:12.190 ==> default: Checking Mount.. 00:05:13.569 ==> default: Folder Successfully Mounted! 00:05:13.569 ==> default: Running provisioner: file... 00:05:14.949 default: ~/.gitconfig => .gitconfig 00:05:15.519 00:05:15.519 SUCCESS! 00:05:15.519 00:05:15.519 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:15.519 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:15.519 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:05:15.519 00:05:15.529 [Pipeline] } 00:05:15.544 [Pipeline] // stage 00:05:15.555 [Pipeline] dir 00:05:15.555 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:05:15.557 [Pipeline] { 00:05:15.569 [Pipeline] catchError 00:05:15.571 [Pipeline] { 00:05:15.585 [Pipeline] sh 00:05:15.868 + vagrant ssh-config --host vagrant 00:05:15.868 + sed -ne /^Host/,$p 00:05:15.868 + tee ssh_conf 00:05:19.160 Host vagrant 00:05:19.161 HostName 192.168.121.234 00:05:19.161 User vagrant 00:05:19.161 Port 22 00:05:19.161 UserKnownHostsFile /dev/null 00:05:19.161 StrictHostKeyChecking no 00:05:19.161 PasswordAuthentication no 00:05:19.161 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:19.161 IdentitiesOnly yes 00:05:19.161 LogLevel FATAL 00:05:19.161 ForwardAgent yes 00:05:19.161 ForwardX11 yes 00:05:19.161 00:05:19.178 [Pipeline] withEnv 00:05:19.181 [Pipeline] { 00:05:19.199 [Pipeline] sh 00:05:19.483 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:05:19.483 source /etc/os-release 00:05:19.483 [[ -e /image.version ]] && img=$(< /image.version) 00:05:19.483 # Minimal, systemd-like check. 00:05:19.483 if [[ -e /.dockerenv ]]; then 00:05:19.483 # Clear garbage from the node's name: 00:05:19.483 # agt-er_autotest_547-896 -> autotest_547-896 00:05:19.483 # $HOSTNAME is the actual container id 00:05:19.483 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:19.483 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:19.483 # We can assume this is a mount from a host where container is running, 00:05:19.483 # so fetch its hostname to easily identify the target swarm worker. 
00:05:19.483 container="$(< /etc/hostname) ($agent)" 00:05:19.483 else 00:05:19.483 # Fallback 00:05:19.483 container=$agent 00:05:19.483 fi 00:05:19.483 fi 00:05:19.483 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:19.483 00:05:19.757 [Pipeline] } 00:05:19.776 [Pipeline] // withEnv 00:05:19.785 [Pipeline] setCustomBuildProperty 00:05:19.801 [Pipeline] stage 00:05:19.803 [Pipeline] { (Tests) 00:05:19.820 [Pipeline] sh 00:05:20.101 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:20.417 [Pipeline] sh 00:05:20.711 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:20.987 [Pipeline] timeout 00:05:20.987 Timeout set to expire in 1 hr 30 min 00:05:20.989 [Pipeline] { 00:05:21.004 [Pipeline] sh 00:05:21.288 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:05:21.854 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:05:21.866 [Pipeline] sh 00:05:22.145 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:05:22.419 [Pipeline] sh 00:05:22.703 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:22.979 [Pipeline] sh 00:05:23.261 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:05:23.521 ++ readlink -f spdk_repo 00:05:23.521 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:23.521 + [[ -n /home/vagrant/spdk_repo ]] 00:05:23.521 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:23.521 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:23.521 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:23.521 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:23.521 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:23.521 + [[ raid-vg-autotest == pkgdep-* ]] 00:05:23.521 + cd /home/vagrant/spdk_repo 00:05:23.521 + source /etc/os-release 00:05:23.521 ++ NAME='Fedora Linux' 00:05:23.521 ++ VERSION='39 (Cloud Edition)' 00:05:23.521 ++ ID=fedora 00:05:23.521 ++ VERSION_ID=39 00:05:23.521 ++ VERSION_CODENAME= 00:05:23.521 ++ PLATFORM_ID=platform:f39 00:05:23.521 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:23.521 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:23.521 ++ LOGO=fedora-logo-icon 00:05:23.521 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:23.521 ++ HOME_URL=https://fedoraproject.org/ 00:05:23.521 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:23.521 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:23.521 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:23.521 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:23.521 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:23.521 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:23.521 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:23.521 ++ SUPPORT_END=2024-11-12 00:05:23.521 ++ VARIANT='Cloud Edition' 00:05:23.521 ++ VARIANT_ID=cloud 00:05:23.521 + uname -a 00:05:23.521 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:23.521 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:24.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.091 Hugepages 00:05:24.091 node hugesize free / total 00:05:24.091 node0 1048576kB 0 / 0 00:05:24.091 node0 2048kB 0 / 0 00:05:24.091 00:05:24.091 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.091 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:24.091 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:24.091 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:05:24.091 + rm -f /tmp/spdk-ld-path 00:05:24.091 + source autorun-spdk.conf 00:05:24.091 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:24.091 ++ SPDK_RUN_ASAN=1 00:05:24.091 ++ SPDK_RUN_UBSAN=1 00:05:24.091 ++ SPDK_TEST_RAID=1 00:05:24.091 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:24.091 ++ RUN_NIGHTLY=0 00:05:24.091 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:24.091 + [[ -n '' ]] 00:05:24.091 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:24.091 + for M in /var/spdk/build-*-manifest.txt 00:05:24.091 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:24.091 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:24.091 + for M in /var/spdk/build-*-manifest.txt 00:05:24.091 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:24.091 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:24.091 + for M in /var/spdk/build-*-manifest.txt 00:05:24.091 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:24.091 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:24.091 ++ uname 00:05:24.091 + [[ Linux == \L\i\n\u\x ]] 00:05:24.091 + sudo dmesg -T 00:05:24.355 + sudo dmesg --clear 00:05:24.355 + sudo dmesg -Tw 00:05:24.355 + dmesg_pid=5424 00:05:24.356 + [[ Fedora Linux == FreeBSD ]] 00:05:24.356 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.356 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:24.356 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:24.356 + [[ -x /usr/src/fio-static/fio ]] 00:05:24.356 + export FIO_BIN=/usr/src/fio-static/fio 00:05:24.356 + FIO_BIN=/usr/src/fio-static/fio 00:05:24.356 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:24.356 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:05:24.356 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:24.356 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:24.356 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:24.356 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:24.356 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:24.356 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:24.356 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:24.617 14:04:55 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:24.617 14:04:55 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:24.617 14:04:55 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:05:24.617 14:04:55 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:24.617 14:04:55 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:24.617 14:04:55 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:05:24.617 14:04:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.617 14:04:55 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:24.617 14:04:55 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:24.617 14:04:55 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.617 14:04:55 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.617 14:04:55 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.617 14:04:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.617 14:04:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.617 14:04:55 -- paths/export.sh@5 -- $ export PATH 00:05:24.617 14:04:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.617 14:04:55 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:24.617 14:04:55 -- common/autobuild_common.sh@493 -- $ date +%s 00:05:24.617 14:04:55 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732716295.XXXXXX 00:05:24.617 14:04:55 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732716295.rO7bmK 00:05:24.617 14:04:55 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:05:24.617 14:04:55 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:05:24.617 14:04:55 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:24.617 14:04:55 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:24.617 14:04:55 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:24.617 14:04:55 -- common/autobuild_common.sh@509 -- $ get_config_params 00:05:24.617 14:04:55 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:05:24.617 14:04:55 -- common/autotest_common.sh@10 -- $ set +x 00:05:24.617 14:04:55 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:05:24.617 14:04:55 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:05:24.617 14:04:55 -- pm/common@17 -- $ local monitor 00:05:24.617 14:04:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.617 14:04:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.617 14:04:55 -- pm/common@25 -- $ sleep 1 00:05:24.617 14:04:55 -- pm/common@21 -- $ date +%s 00:05:24.617 14:04:55 -- pm/common@21 -- $ date +%s 00:05:24.617 
14:04:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716295 00:05:24.617 14:04:55 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716295 00:05:24.617 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716295_collect-vmstat.pm.log 00:05:24.617 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716295_collect-cpu-load.pm.log 00:05:25.558 14:04:56 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:25.558 14:04:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:25.558 14:04:56 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:25.558 14:04:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:25.558 14:04:56 -- spdk/autobuild.sh@16 -- $ date -u 00:05:25.558 Wed Nov 27 02:04:56 PM UTC 2024 00:05:25.558 14:04:56 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:25.558 v25.01-pre-276-g35cd3e84d 00:05:25.558 14:04:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:25.558 14:04:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:25.558 14:04:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:25.558 14:04:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:25.558 14:04:56 -- common/autotest_common.sh@10 -- $ set +x 00:05:25.817 ************************************ 00:05:25.817 START TEST asan 00:05:25.817 ************************************ 00:05:25.817 using asan 00:05:25.817 14:04:56 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:05:25.817 00:05:25.817 real 0m0.000s 00:05:25.817 user 0m0.000s 00:05:25.817 sys 0m0.000s 00:05:25.817 14:04:56 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:25.817 14:04:56 asan -- common/autotest_common.sh@10 -- $ set +x 
00:05:25.817 ************************************ 00:05:25.817 END TEST asan 00:05:25.817 ************************************ 00:05:25.817 14:04:56 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:25.817 14:04:56 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:25.817 14:04:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:25.817 14:04:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:25.817 14:04:56 -- common/autotest_common.sh@10 -- $ set +x 00:05:25.817 ************************************ 00:05:25.817 START TEST ubsan 00:05:25.817 ************************************ 00:05:25.817 using ubsan 00:05:25.817 14:04:56 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:25.817 00:05:25.817 real 0m0.000s 00:05:25.817 user 0m0.000s 00:05:25.817 sys 0m0.000s 00:05:25.817 14:04:56 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:25.817 14:04:56 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:25.817 ************************************ 00:05:25.817 END TEST ubsan 00:05:25.817 ************************************ 00:05:25.817 14:04:56 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:25.817 14:04:56 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:25.817 14:04:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:25.817 14:04:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:05:26.076 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:26.076 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:26.335 Using 'verbs' RDMA provider 00:05:42.605 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:00.699 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:00.699 Creating mk/config.mk...done. 00:06:00.699 Creating mk/cc.flags.mk...done. 00:06:00.699 Type 'make' to build. 00:06:00.699 14:05:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:06:00.699 14:05:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:00.699 14:05:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:00.699 14:05:29 -- common/autotest_common.sh@10 -- $ set +x 00:06:00.699 ************************************ 00:06:00.699 START TEST make 00:06:00.699 ************************************ 00:06:00.699 14:05:29 make -- common/autotest_common.sh@1129 -- $ make -j10 00:06:00.699 make[1]: Nothing to be done for 'all'. 
00:06:10.696 The Meson build system 00:06:10.696 Version: 1.5.0 00:06:10.696 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:06:10.696 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:06:10.696 Build type: native build 00:06:10.696 Program cat found: YES (/usr/bin/cat) 00:06:10.696 Project name: DPDK 00:06:10.696 Project version: 24.03.0 00:06:10.696 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:10.696 C linker for the host machine: cc ld.bfd 2.40-14 00:06:10.696 Host machine cpu family: x86_64 00:06:10.696 Host machine cpu: x86_64 00:06:10.696 Message: ## Building in Developer Mode ## 00:06:10.696 Program pkg-config found: YES (/usr/bin/pkg-config) 00:06:10.696 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:06:10.696 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:06:10.696 Program python3 found: YES (/usr/bin/python3) 00:06:10.696 Program cat found: YES (/usr/bin/cat) 00:06:10.696 Compiler for C supports arguments -march=native: YES 00:06:10.696 Checking for size of "void *" : 8 00:06:10.696 Checking for size of "void *" : 8 (cached) 00:06:10.696 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:06:10.696 Library m found: YES 00:06:10.696 Library numa found: YES 00:06:10.696 Has header "numaif.h" : YES 00:06:10.696 Library fdt found: NO 00:06:10.696 Library execinfo found: NO 00:06:10.696 Has header "execinfo.h" : YES 00:06:10.697 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:10.697 Run-time dependency libarchive found: NO (tried pkgconfig) 00:06:10.697 Run-time dependency libbsd found: NO (tried pkgconfig) 00:06:10.697 Run-time dependency jansson found: NO (tried pkgconfig) 00:06:10.697 Run-time dependency openssl found: YES 3.1.1 00:06:10.697 Run-time dependency libpcap found: YES 1.10.4 00:06:10.697 Has header "pcap.h" with dependency 
libpcap: YES 00:06:10.697 Compiler for C supports arguments -Wcast-qual: YES 00:06:10.697 Compiler for C supports arguments -Wdeprecated: YES 00:06:10.697 Compiler for C supports arguments -Wformat: YES 00:06:10.697 Compiler for C supports arguments -Wformat-nonliteral: NO 00:06:10.697 Compiler for C supports arguments -Wformat-security: NO 00:06:10.697 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:10.697 Compiler for C supports arguments -Wmissing-prototypes: YES 00:06:10.697 Compiler for C supports arguments -Wnested-externs: YES 00:06:10.697 Compiler for C supports arguments -Wold-style-definition: YES 00:06:10.697 Compiler for C supports arguments -Wpointer-arith: YES 00:06:10.697 Compiler for C supports arguments -Wsign-compare: YES 00:06:10.697 Compiler for C supports arguments -Wstrict-prototypes: YES 00:06:10.697 Compiler for C supports arguments -Wundef: YES 00:06:10.697 Compiler for C supports arguments -Wwrite-strings: YES 00:06:10.697 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:06:10.697 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:06:10.697 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:10.697 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:06:10.697 Program objdump found: YES (/usr/bin/objdump) 00:06:10.697 Compiler for C supports arguments -mavx512f: YES 00:06:10.697 Checking if "AVX512 checking" compiles: YES 00:06:10.697 Fetching value of define "__SSE4_2__" : 1 00:06:10.697 Fetching value of define "__AES__" : 1 00:06:10.697 Fetching value of define "__AVX__" : 1 00:06:10.697 Fetching value of define "__AVX2__" : 1 00:06:10.697 Fetching value of define "__AVX512BW__" : 1 00:06:10.697 Fetching value of define "__AVX512CD__" : 1 00:06:10.697 Fetching value of define "__AVX512DQ__" : 1 00:06:10.697 Fetching value of define "__AVX512F__" : 1 00:06:10.697 Fetching value of define "__AVX512VL__" : 1 00:06:10.697 Fetching value of define 
"__PCLMUL__" : 1 00:06:10.697 Fetching value of define "__RDRND__" : 1 00:06:10.697 Fetching value of define "__RDSEED__" : 1 00:06:10.697 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:06:10.697 Fetching value of define "__znver1__" : (undefined) 00:06:10.697 Fetching value of define "__znver2__" : (undefined) 00:06:10.697 Fetching value of define "__znver3__" : (undefined) 00:06:10.697 Fetching value of define "__znver4__" : (undefined) 00:06:10.697 Library asan found: YES 00:06:10.697 Compiler for C supports arguments -Wno-format-truncation: YES 00:06:10.697 Message: lib/log: Defining dependency "log" 00:06:10.697 Message: lib/kvargs: Defining dependency "kvargs" 00:06:10.697 Message: lib/telemetry: Defining dependency "telemetry" 00:06:10.697 Library rt found: YES 00:06:10.697 Checking for function "getentropy" : NO 00:06:10.697 Message: lib/eal: Defining dependency "eal" 00:06:10.697 Message: lib/ring: Defining dependency "ring" 00:06:10.697 Message: lib/rcu: Defining dependency "rcu" 00:06:10.697 Message: lib/mempool: Defining dependency "mempool" 00:06:10.697 Message: lib/mbuf: Defining dependency "mbuf" 00:06:10.697 Fetching value of define "__PCLMUL__" : 1 (cached) 00:06:10.697 Fetching value of define "__AVX512F__" : 1 (cached) 00:06:10.697 Fetching value of define "__AVX512BW__" : 1 (cached) 00:06:10.697 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:06:10.697 Fetching value of define "__AVX512VL__" : 1 (cached) 00:06:10.697 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:06:10.697 Compiler for C supports arguments -mpclmul: YES 00:06:10.697 Compiler for C supports arguments -maes: YES 00:06:10.697 Compiler for C supports arguments -mavx512f: YES (cached) 00:06:10.697 Compiler for C supports arguments -mavx512bw: YES 00:06:10.697 Compiler for C supports arguments -mavx512dq: YES 00:06:10.697 Compiler for C supports arguments -mavx512vl: YES 00:06:10.697 Compiler for C supports arguments -mvpclmulqdq: YES 
00:06:10.697 Compiler for C supports arguments -mavx2: YES 00:06:10.697 Compiler for C supports arguments -mavx: YES 00:06:10.697 Message: lib/net: Defining dependency "net" 00:06:10.697 Message: lib/meter: Defining dependency "meter" 00:06:10.697 Message: lib/ethdev: Defining dependency "ethdev" 00:06:10.697 Message: lib/pci: Defining dependency "pci" 00:06:10.697 Message: lib/cmdline: Defining dependency "cmdline" 00:06:10.697 Message: lib/hash: Defining dependency "hash" 00:06:10.697 Message: lib/timer: Defining dependency "timer" 00:06:10.697 Message: lib/compressdev: Defining dependency "compressdev" 00:06:10.697 Message: lib/cryptodev: Defining dependency "cryptodev" 00:06:10.697 Message: lib/dmadev: Defining dependency "dmadev" 00:06:10.697 Compiler for C supports arguments -Wno-cast-qual: YES 00:06:10.697 Message: lib/power: Defining dependency "power" 00:06:10.697 Message: lib/reorder: Defining dependency "reorder" 00:06:10.697 Message: lib/security: Defining dependency "security" 00:06:10.697 Has header "linux/userfaultfd.h" : YES 00:06:10.697 Has header "linux/vduse.h" : YES 00:06:10.697 Message: lib/vhost: Defining dependency "vhost" 00:06:10.697 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:06:10.697 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:06:10.697 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:06:10.697 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:06:10.697 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:06:10.697 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:06:10.697 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:06:10.697 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:06:10.697 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:06:10.697 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:06:10.697 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:06:10.697 Configuring doxy-api-html.conf using configuration 00:06:10.697 Configuring doxy-api-man.conf using configuration 00:06:10.697 Program mandb found: YES (/usr/bin/mandb) 00:06:10.697 Program sphinx-build found: NO 00:06:10.697 Configuring rte_build_config.h using configuration 00:06:10.697 Message: 00:06:10.697 ================= 00:06:10.697 Applications Enabled 00:06:10.697 ================= 00:06:10.697 00:06:10.697 apps: 00:06:10.697 00:06:10.697 00:06:10.697 Message: 00:06:10.697 ================= 00:06:10.697 Libraries Enabled 00:06:10.697 ================= 00:06:10.697 00:06:10.697 libs: 00:06:10.697 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:06:10.697 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:06:10.697 cryptodev, dmadev, power, reorder, security, vhost, 00:06:10.697 00:06:10.697 Message: 00:06:10.697 =============== 00:06:10.697 Drivers Enabled 00:06:10.697 =============== 00:06:10.697 00:06:10.697 common: 00:06:10.697 00:06:10.697 bus: 00:06:10.697 pci, vdev, 00:06:10.697 mempool: 00:06:10.697 ring, 00:06:10.697 dma: 00:06:10.697 00:06:10.697 net: 00:06:10.697 00:06:10.697 crypto: 00:06:10.697 00:06:10.697 compress: 00:06:10.697 00:06:10.697 vdpa: 00:06:10.697 00:06:10.697 00:06:10.697 Message: 00:06:10.697 ================= 00:06:10.697 Content Skipped 00:06:10.697 ================= 00:06:10.697 00:06:10.697 apps: 00:06:10.697 dumpcap: explicitly disabled via build config 00:06:10.697 graph: explicitly disabled via build config 00:06:10.697 pdump: explicitly disabled via build config 00:06:10.697 proc-info: explicitly disabled via build config 00:06:10.697 test-acl: explicitly disabled via build config 00:06:10.697 test-bbdev: explicitly disabled via build config 00:06:10.697 test-cmdline: explicitly disabled via build config 00:06:10.697 test-compress-perf: explicitly disabled via build config 00:06:10.697 test-crypto-perf: explicitly disabled via build 
config 00:06:10.697 test-dma-perf: explicitly disabled via build config 00:06:10.697 test-eventdev: explicitly disabled via build config 00:06:10.697 test-fib: explicitly disabled via build config 00:06:10.697 test-flow-perf: explicitly disabled via build config 00:06:10.697 test-gpudev: explicitly disabled via build config 00:06:10.697 test-mldev: explicitly disabled via build config 00:06:10.697 test-pipeline: explicitly disabled via build config 00:06:10.697 test-pmd: explicitly disabled via build config 00:06:10.697 test-regex: explicitly disabled via build config 00:06:10.697 test-sad: explicitly disabled via build config 00:06:10.697 test-security-perf: explicitly disabled via build config 00:06:10.697 00:06:10.697 libs: 00:06:10.697 argparse: explicitly disabled via build config 00:06:10.697 metrics: explicitly disabled via build config 00:06:10.697 acl: explicitly disabled via build config 00:06:10.697 bbdev: explicitly disabled via build config 00:06:10.697 bitratestats: explicitly disabled via build config 00:06:10.697 bpf: explicitly disabled via build config 00:06:10.697 cfgfile: explicitly disabled via build config 00:06:10.697 distributor: explicitly disabled via build config 00:06:10.697 efd: explicitly disabled via build config 00:06:10.697 eventdev: explicitly disabled via build config 00:06:10.697 dispatcher: explicitly disabled via build config 00:06:10.697 gpudev: explicitly disabled via build config 00:06:10.697 gro: explicitly disabled via build config 00:06:10.697 gso: explicitly disabled via build config 00:06:10.697 ip_frag: explicitly disabled via build config 00:06:10.697 jobstats: explicitly disabled via build config 00:06:10.697 latencystats: explicitly disabled via build config 00:06:10.697 lpm: explicitly disabled via build config 00:06:10.697 member: explicitly disabled via build config 00:06:10.697 pcapng: explicitly disabled via build config 00:06:10.697 rawdev: explicitly disabled via build config 00:06:10.697 regexdev: explicitly 
disabled via build config 00:06:10.697 mldev: explicitly disabled via build config 00:06:10.698 rib: explicitly disabled via build config 00:06:10.698 sched: explicitly disabled via build config 00:06:10.698 stack: explicitly disabled via build config 00:06:10.698 ipsec: explicitly disabled via build config 00:06:10.698 pdcp: explicitly disabled via build config 00:06:10.698 fib: explicitly disabled via build config 00:06:10.698 port: explicitly disabled via build config 00:06:10.698 pdump: explicitly disabled via build config 00:06:10.698 table: explicitly disabled via build config 00:06:10.698 pipeline: explicitly disabled via build config 00:06:10.698 graph: explicitly disabled via build config 00:06:10.698 node: explicitly disabled via build config 00:06:10.698 00:06:10.698 drivers: 00:06:10.698 common/cpt: not in enabled drivers build config 00:06:10.698 common/dpaax: not in enabled drivers build config 00:06:10.698 common/iavf: not in enabled drivers build config 00:06:10.698 common/idpf: not in enabled drivers build config 00:06:10.698 common/ionic: not in enabled drivers build config 00:06:10.698 common/mvep: not in enabled drivers build config 00:06:10.698 common/octeontx: not in enabled drivers build config 00:06:10.698 bus/auxiliary: not in enabled drivers build config 00:06:10.698 bus/cdx: not in enabled drivers build config 00:06:10.698 bus/dpaa: not in enabled drivers build config 00:06:10.698 bus/fslmc: not in enabled drivers build config 00:06:10.698 bus/ifpga: not in enabled drivers build config 00:06:10.698 bus/platform: not in enabled drivers build config 00:06:10.698 bus/uacce: not in enabled drivers build config 00:06:10.698 bus/vmbus: not in enabled drivers build config 00:06:10.698 common/cnxk: not in enabled drivers build config 00:06:10.698 common/mlx5: not in enabled drivers build config 00:06:10.698 common/nfp: not in enabled drivers build config 00:06:10.698 common/nitrox: not in enabled drivers build config 00:06:10.698 common/qat: not 
in enabled drivers build config 00:06:10.698 common/sfc_efx: not in enabled drivers build config 00:06:10.698 mempool/bucket: not in enabled drivers build config 00:06:10.698 mempool/cnxk: not in enabled drivers build config 00:06:10.698 mempool/dpaa: not in enabled drivers build config 00:06:10.698 mempool/dpaa2: not in enabled drivers build config 00:06:10.698 mempool/octeontx: not in enabled drivers build config 00:06:10.698 mempool/stack: not in enabled drivers build config 00:06:10.698 dma/cnxk: not in enabled drivers build config 00:06:10.698 dma/dpaa: not in enabled drivers build config 00:06:10.698 dma/dpaa2: not in enabled drivers build config 00:06:10.698 dma/hisilicon: not in enabled drivers build config 00:06:10.698 dma/idxd: not in enabled drivers build config 00:06:10.698 dma/ioat: not in enabled drivers build config 00:06:10.698 dma/skeleton: not in enabled drivers build config 00:06:10.698 net/af_packet: not in enabled drivers build config 00:06:10.698 net/af_xdp: not in enabled drivers build config 00:06:10.698 net/ark: not in enabled drivers build config 00:06:10.698 net/atlantic: not in enabled drivers build config 00:06:10.698 net/avp: not in enabled drivers build config 00:06:10.698 net/axgbe: not in enabled drivers build config 00:06:10.698 net/bnx2x: not in enabled drivers build config 00:06:10.698 net/bnxt: not in enabled drivers build config 00:06:10.698 net/bonding: not in enabled drivers build config 00:06:10.698 net/cnxk: not in enabled drivers build config 00:06:10.698 net/cpfl: not in enabled drivers build config 00:06:10.698 net/cxgbe: not in enabled drivers build config 00:06:10.698 net/dpaa: not in enabled drivers build config 00:06:10.698 net/dpaa2: not in enabled drivers build config 00:06:10.698 net/e1000: not in enabled drivers build config 00:06:10.698 net/ena: not in enabled drivers build config 00:06:10.698 net/enetc: not in enabled drivers build config 00:06:10.698 net/enetfec: not in enabled drivers build config 
00:06:10.698 net/enic: not in enabled drivers build config 00:06:10.698 net/failsafe: not in enabled drivers build config 00:06:10.698 net/fm10k: not in enabled drivers build config 00:06:10.698 net/gve: not in enabled drivers build config 00:06:10.698 net/hinic: not in enabled drivers build config 00:06:10.698 net/hns3: not in enabled drivers build config 00:06:10.698 net/i40e: not in enabled drivers build config 00:06:10.698 net/iavf: not in enabled drivers build config 00:06:10.698 net/ice: not in enabled drivers build config 00:06:10.698 net/idpf: not in enabled drivers build config 00:06:10.698 net/igc: not in enabled drivers build config 00:06:10.698 net/ionic: not in enabled drivers build config 00:06:10.698 net/ipn3ke: not in enabled drivers build config 00:06:10.698 net/ixgbe: not in enabled drivers build config 00:06:10.698 net/mana: not in enabled drivers build config 00:06:10.698 net/memif: not in enabled drivers build config 00:06:10.698 net/mlx4: not in enabled drivers build config 00:06:10.698 net/mlx5: not in enabled drivers build config 00:06:10.698 net/mvneta: not in enabled drivers build config 00:06:10.698 net/mvpp2: not in enabled drivers build config 00:06:10.698 net/netvsc: not in enabled drivers build config 00:06:10.698 net/nfb: not in enabled drivers build config 00:06:10.698 net/nfp: not in enabled drivers build config 00:06:10.698 net/ngbe: not in enabled drivers build config 00:06:10.698 net/null: not in enabled drivers build config 00:06:10.698 net/octeontx: not in enabled drivers build config 00:06:10.698 net/octeon_ep: not in enabled drivers build config 00:06:10.698 net/pcap: not in enabled drivers build config 00:06:10.698 net/pfe: not in enabled drivers build config 00:06:10.698 net/qede: not in enabled drivers build config 00:06:10.698 net/ring: not in enabled drivers build config 00:06:10.698 net/sfc: not in enabled drivers build config 00:06:10.698 net/softnic: not in enabled drivers build config 00:06:10.698 net/tap: not in 
enabled drivers build config 00:06:10.698 net/thunderx: not in enabled drivers build config 00:06:10.698 net/txgbe: not in enabled drivers build config 00:06:10.698 net/vdev_netvsc: not in enabled drivers build config 00:06:10.698 net/vhost: not in enabled drivers build config 00:06:10.698 net/virtio: not in enabled drivers build config 00:06:10.698 net/vmxnet3: not in enabled drivers build config 00:06:10.698 raw/*: missing internal dependency, "rawdev" 00:06:10.698 crypto/armv8: not in enabled drivers build config 00:06:10.698 crypto/bcmfs: not in enabled drivers build config 00:06:10.698 crypto/caam_jr: not in enabled drivers build config 00:06:10.698 crypto/ccp: not in enabled drivers build config 00:06:10.698 crypto/cnxk: not in enabled drivers build config 00:06:10.698 crypto/dpaa_sec: not in enabled drivers build config 00:06:10.698 crypto/dpaa2_sec: not in enabled drivers build config 00:06:10.698 crypto/ipsec_mb: not in enabled drivers build config 00:06:10.698 crypto/mlx5: not in enabled drivers build config 00:06:10.698 crypto/mvsam: not in enabled drivers build config 00:06:10.698 crypto/nitrox: not in enabled drivers build config 00:06:10.698 crypto/null: not in enabled drivers build config 00:06:10.698 crypto/octeontx: not in enabled drivers build config 00:06:10.698 crypto/openssl: not in enabled drivers build config 00:06:10.698 crypto/scheduler: not in enabled drivers build config 00:06:10.698 crypto/uadk: not in enabled drivers build config 00:06:10.698 crypto/virtio: not in enabled drivers build config 00:06:10.698 compress/isal: not in enabled drivers build config 00:06:10.698 compress/mlx5: not in enabled drivers build config 00:06:10.698 compress/nitrox: not in enabled drivers build config 00:06:10.698 compress/octeontx: not in enabled drivers build config 00:06:10.698 compress/zlib: not in enabled drivers build config 00:06:10.698 regex/*: missing internal dependency, "regexdev" 00:06:10.698 ml/*: missing internal dependency, "mldev" 
00:06:10.698 vdpa/ifc: not in enabled drivers build config 00:06:10.698 vdpa/mlx5: not in enabled drivers build config 00:06:10.698 vdpa/nfp: not in enabled drivers build config 00:06:10.698 vdpa/sfc: not in enabled drivers build config 00:06:10.698 event/*: missing internal dependency, "eventdev" 00:06:10.698 baseband/*: missing internal dependency, "bbdev" 00:06:10.698 gpu/*: missing internal dependency, "gpudev" 00:06:10.698 00:06:10.698 00:06:10.698 Build targets in project: 85 00:06:10.698 00:06:10.698 DPDK 24.03.0 00:06:10.698 00:06:10.698 User defined options 00:06:10.698 buildtype : debug 00:06:10.698 default_library : shared 00:06:10.698 libdir : lib 00:06:10.698 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:10.698 b_sanitize : address 00:06:10.698 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:06:10.698 c_link_args : 00:06:10.698 cpu_instruction_set: native 00:06:10.698 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:06:10.698 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:06:10.698 enable_docs : false 00:06:10.698 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:06:10.698 enable_kmods : false 00:06:10.698 max_lcores : 128 00:06:10.698 tests : false 00:06:10.698 00:06:10.698 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:11.268 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:06:11.268 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:06:11.268 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:06:11.268 [3/268] Linking static target lib/librte_kvargs.a 00:06:11.268 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:06:11.268 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:06:11.268 [6/268] Linking static target lib/librte_log.a 00:06:11.836 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:06:11.836 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:06:11.836 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:06:11.836 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:06:11.836 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:06:11.836 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:06:11.836 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:06:11.836 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:06:11.836 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:06:11.836 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:06:12.094 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:06:12.094 [18/268] Linking static target lib/librte_telemetry.a 00:06:12.353 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:06:12.353 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:06:12.353 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:06:12.353 [22/268] Linking target lib/librte_log.so.24.1 00:06:12.353 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:06:12.610 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:06:12.610 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:06:12.610 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:06:12.610 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:06:12.610 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:06:12.868 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:06:12.868 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:06:12.868 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:06:12.868 [32/268] Linking target lib/librte_kvargs.so.24.1 00:06:13.126 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:06:13.126 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:06:13.126 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:06:13.126 [36/268] Linking target lib/librte_telemetry.so.24.1 00:06:13.126 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:06:13.126 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:06:13.385 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:06:13.385 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:06:13.385 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:06:13.385 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:06:13.385 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:06:13.385 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:06:13.385 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 
00:06:13.385 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:06:13.643 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:06:13.643 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:06:13.643 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:06:13.902 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:06:13.902 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:06:14.159 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:06:14.160 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:06:14.160 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:06:14.160 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:06:14.160 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:06:14.160 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:06:14.160 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:06:14.160 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:06:14.418 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:06:14.418 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:06:14.675 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:06:14.675 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:06:14.675 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:06:14.675 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:06:14.675 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:06:14.675 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:06:14.675 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:06:14.936 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:06:15.195 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:06:15.195 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:06:15.195 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:06:15.195 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:06:15.195 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:06:15.195 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:06:15.195 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:06:15.195 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:06:15.454 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:06:15.454 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:06:15.713 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:06:15.713 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:06:15.713 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:06:15.713 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:06:15.713 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:06:15.713 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:06:15.713 [86/268] Linking static target lib/librte_ring.a 00:06:15.973 [87/268] Linking static target lib/librte_eal.a 00:06:15.973 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:06:15.973 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:06:15.973 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:06:15.973 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:06:16.233 
[92/268] Linking static target lib/librte_rcu.a 00:06:16.233 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:06:16.233 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.233 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:06:16.233 [96/268] Linking static target lib/librte_mempool.a 00:06:16.492 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:06:16.492 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:06:16.492 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:06:16.492 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:06:16.492 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:06:16.751 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:06:16.751 [103/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:06:17.010 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:06:17.010 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:06:17.269 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:06:17.269 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:06:17.269 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:06:17.269 [109/268] Linking static target lib/librte_mbuf.a 00:06:17.269 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:06:17.269 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:06:17.528 [112/268] Linking static target lib/librte_meter.a 00:06:17.528 [113/268] Linking static target lib/librte_net.a 00:06:17.528 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:06:17.528 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:06:17.786 
[116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:06:17.786 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.044 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.044 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:06:18.302 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:06:18.560 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:06:18.560 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:06:18.560 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:06:18.560 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:06:18.818 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:06:18.818 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:06:18.818 [127/268] Linking static target lib/librte_pci.a 00:06:18.818 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:06:19.120 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:06:19.120 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:06:19.120 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:06:19.120 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:06:19.120 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:06:19.120 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:06:19.410 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:06:19.410 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:06:19.410 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:06:19.410 [138/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:06:19.410 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:06:19.410 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:06:19.410 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:19.410 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:06:19.410 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:06:19.410 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:06:19.669 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:06:19.669 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:06:19.669 [147/268] Linking static target lib/librte_cmdline.a 00:06:19.926 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:06:19.926 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:06:20.185 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:06:20.443 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:06:20.443 [152/268] Linking static target lib/librte_timer.a 00:06:20.443 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:06:20.702 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:06:20.702 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:06:20.702 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:06:20.961 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:06:20.961 [158/268] Linking static target lib/librte_hash.a 00:06:20.961 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:06:20.961 [160/268] Linking static target lib/librte_compressdev.a 
00:06:20.961 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:06:20.961 [162/268] Linking static target lib/librte_ethdev.a 00:06:21.220 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.220 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:06:21.479 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:06:21.479 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:06:21.479 [167/268] Linking static target lib/librte_dmadev.a 00:06:21.479 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:06:21.479 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:06:21.479 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:06:21.737 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:06:21.996 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:06:21.996 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:06:22.254 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:06:22.254 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.254 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.512 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:06:22.513 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:06:22.513 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:22.513 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:06:22.513 [181/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:06:22.771 [182/268] 
Linking static target lib/librte_cryptodev.a 00:06:22.771 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:22.771 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:22.771 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:22.771 [186/268] Linking static target lib/librte_power.a 00:06:23.030 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:23.288 [188/268] Linking static target lib/librte_reorder.a 00:06:23.288 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:23.288 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:23.853 [191/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:23.853 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:23.853 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:23.853 [194/268] Linking static target lib/librte_security.a 00:06:24.111 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.369 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:24.369 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:24.627 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:24.627 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:24.946 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:24.946 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:24.946 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:25.204 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:25.204 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:06:25.461 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:25.719 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:25.719 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:25.719 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:25.719 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:25.719 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:25.719 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:25.976 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:25.976 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:25.976 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:25.976 [215/268] Linking static target drivers/librte_bus_vdev.a 00:06:26.234 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:26.234 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:26.234 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:26.234 [219/268] Linking static target drivers/librte_bus_pci.a 00:06:26.492 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:26.492 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:26.492 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:27.056 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:27.056 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:27.056 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:27.056 [226/268] Linking static target drivers/librte_mempool_ring.a 00:06:27.056 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.991 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:27.991 [229/268] Linking target lib/librte_eal.so.24.1 00:06:28.250 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:28.250 [231/268] Linking target lib/librte_pci.so.24.1 00:06:28.250 [232/268] Linking target lib/librte_timer.so.24.1 00:06:28.250 [233/268] Linking target lib/librte_ring.so.24.1 00:06:28.250 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:06:28.250 [235/268] Linking target lib/librte_meter.so.24.1 00:06:28.250 [236/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:28.250 [237/268] Linking target lib/librte_dmadev.so.24.1 00:06:28.512 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:28.512 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:28.512 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:28.512 [241/268] Linking target lib/librte_mempool.so.24.1 00:06:28.512 [242/268] Linking target lib/librte_rcu.so.24.1 00:06:28.512 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:28.512 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:28.512 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:06:28.770 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:28.770 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:28.770 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:06:28.770 [249/268] Linking target lib/librte_mbuf.so.24.1 00:06:29.028 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:06:29.028 [251/268] Linking target lib/librte_reorder.so.24.1 00:06:29.029 [252/268] Linking target lib/librte_net.so.24.1 00:06:29.029 [253/268] Linking target lib/librte_compressdev.so.24.1 00:06:29.029 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:06:29.287 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:29.287 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:29.287 [257/268] Linking target lib/librte_cmdline.so.24.1 00:06:29.287 [258/268] Linking target lib/librte_hash.so.24.1 00:06:29.287 [259/268] Linking target lib/librte_security.so.24.1 00:06:29.545 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:29.803 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:29.803 [262/268] Linking target lib/librte_ethdev.so.24.1 00:06:30.060 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:30.060 [264/268] Linking target lib/librte_power.so.24.1 00:06:33.372 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:33.372 [266/268] Linking static target lib/librte_vhost.a 00:06:34.748 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:34.748 [268/268] Linking target lib/librte_vhost.so.24.1 00:06:34.748 INFO: autodetecting backend as ninja 00:06:34.748 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:01.287 CC lib/ut_mock/mock.o 00:07:01.287 CC lib/ut/ut.o 00:07:01.287 CC lib/log/log_flags.o 00:07:01.287 CC lib/log/log.o 00:07:01.287 CC lib/log/log_deprecated.o 00:07:01.287 LIB libspdk_ut_mock.a 00:07:01.287 LIB 
libspdk_ut.a 00:07:01.287 LIB libspdk_log.a 00:07:01.287 SO libspdk_ut_mock.so.6.0 00:07:01.287 SO libspdk_ut.so.2.0 00:07:01.287 SO libspdk_log.so.7.1 00:07:01.287 SYMLINK libspdk_ut_mock.so 00:07:01.287 SYMLINK libspdk_ut.so 00:07:01.287 SYMLINK libspdk_log.so 00:07:01.287 CC lib/dma/dma.o 00:07:01.287 CC lib/ioat/ioat.o 00:07:01.287 CC lib/util/bit_array.o 00:07:01.287 CC lib/util/base64.o 00:07:01.287 CC lib/util/cpuset.o 00:07:01.287 CC lib/util/crc16.o 00:07:01.287 CXX lib/trace_parser/trace.o 00:07:01.287 CC lib/util/crc32.o 00:07:01.287 CC lib/util/crc32c.o 00:07:01.287 CC lib/vfio_user/host/vfio_user_pci.o 00:07:01.287 CC lib/util/crc32_ieee.o 00:07:01.287 CC lib/util/crc64.o 00:07:01.287 CC lib/util/dif.o 00:07:01.287 CC lib/util/fd.o 00:07:01.287 LIB libspdk_dma.a 00:07:01.287 CC lib/util/fd_group.o 00:07:01.287 CC lib/util/file.o 00:07:01.287 SO libspdk_dma.so.5.0 00:07:01.287 CC lib/util/hexlify.o 00:07:01.287 CC lib/vfio_user/host/vfio_user.o 00:07:01.287 SYMLINK libspdk_dma.so 00:07:01.287 CC lib/util/iov.o 00:07:01.287 LIB libspdk_ioat.a 00:07:01.287 SO libspdk_ioat.so.7.0 00:07:01.287 CC lib/util/math.o 00:07:01.287 CC lib/util/net.o 00:07:01.287 CC lib/util/pipe.o 00:07:01.287 SYMLINK libspdk_ioat.so 00:07:01.287 CC lib/util/strerror_tls.o 00:07:01.287 CC lib/util/string.o 00:07:01.287 LIB libspdk_vfio_user.a 00:07:01.287 CC lib/util/uuid.o 00:07:01.287 CC lib/util/xor.o 00:07:01.287 SO libspdk_vfio_user.so.5.0 00:07:01.287 CC lib/util/zipf.o 00:07:01.287 CC lib/util/md5.o 00:07:01.287 SYMLINK libspdk_vfio_user.so 00:07:01.287 LIB libspdk_util.a 00:07:01.287 SO libspdk_util.so.10.1 00:07:01.287 LIB libspdk_trace_parser.a 00:07:01.287 SO libspdk_trace_parser.so.6.0 00:07:01.287 SYMLINK libspdk_util.so 00:07:01.545 SYMLINK libspdk_trace_parser.so 00:07:01.545 CC lib/rdma_utils/rdma_utils.o 00:07:01.545 CC lib/env_dpdk/env.o 00:07:01.545 CC lib/env_dpdk/pci.o 00:07:01.545 CC lib/env_dpdk/init.o 00:07:01.545 CC lib/env_dpdk/threads.o 00:07:01.545 CC 
lib/env_dpdk/memory.o 00:07:01.545 CC lib/idxd/idxd.o 00:07:01.546 CC lib/vmd/vmd.o 00:07:01.546 CC lib/conf/conf.o 00:07:01.546 CC lib/json/json_parse.o 00:07:01.803 CC lib/json/json_util.o 00:07:01.803 LIB libspdk_conf.a 00:07:01.803 SO libspdk_conf.so.6.0 00:07:01.803 LIB libspdk_rdma_utils.a 00:07:01.803 CC lib/json/json_write.o 00:07:01.803 SO libspdk_rdma_utils.so.1.0 00:07:02.061 SYMLINK libspdk_conf.so 00:07:02.061 SYMLINK libspdk_rdma_utils.so 00:07:02.061 CC lib/idxd/idxd_user.o 00:07:02.061 CC lib/idxd/idxd_kernel.o 00:07:02.061 CC lib/env_dpdk/pci_ioat.o 00:07:02.061 CC lib/vmd/led.o 00:07:02.061 CC lib/env_dpdk/pci_virtio.o 00:07:02.061 CC lib/env_dpdk/pci_vmd.o 00:07:02.061 CC lib/env_dpdk/pci_idxd.o 00:07:02.320 LIB libspdk_json.a 00:07:02.320 CC lib/env_dpdk/pci_event.o 00:07:02.320 CC lib/env_dpdk/sigbus_handler.o 00:07:02.320 SO libspdk_json.so.6.0 00:07:02.320 CC lib/env_dpdk/pci_dpdk.o 00:07:02.320 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:02.320 SYMLINK libspdk_json.so 00:07:02.320 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:02.320 CC lib/rdma_provider/common.o 00:07:02.320 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:02.320 LIB libspdk_idxd.a 00:07:02.320 SO libspdk_idxd.so.12.1 00:07:02.320 LIB libspdk_vmd.a 00:07:02.320 SO libspdk_vmd.so.6.0 00:07:02.577 SYMLINK libspdk_idxd.so 00:07:02.577 SYMLINK libspdk_vmd.so 00:07:02.577 CC lib/jsonrpc/jsonrpc_server.o 00:07:02.577 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:02.577 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:02.578 CC lib/jsonrpc/jsonrpc_client.o 00:07:02.578 LIB libspdk_rdma_provider.a 00:07:02.578 SO libspdk_rdma_provider.so.7.0 00:07:02.836 SYMLINK libspdk_rdma_provider.so 00:07:02.836 LIB libspdk_jsonrpc.a 00:07:03.094 SO libspdk_jsonrpc.so.6.0 00:07:03.094 SYMLINK libspdk_jsonrpc.so 00:07:03.352 CC lib/rpc/rpc.o 00:07:03.610 LIB libspdk_env_dpdk.a 00:07:03.610 SO libspdk_env_dpdk.so.15.1 00:07:03.610 LIB libspdk_rpc.a 00:07:03.869 SO libspdk_rpc.so.6.0 00:07:03.869 SYMLINK 
libspdk_env_dpdk.so 00:07:03.870 SYMLINK libspdk_rpc.so 00:07:04.128 CC lib/trace/trace.o 00:07:04.128 CC lib/trace/trace_flags.o 00:07:04.128 CC lib/notify/notify.o 00:07:04.128 CC lib/notify/notify_rpc.o 00:07:04.128 CC lib/trace/trace_rpc.o 00:07:04.128 CC lib/keyring/keyring_rpc.o 00:07:04.128 CC lib/keyring/keyring.o 00:07:04.386 LIB libspdk_notify.a 00:07:04.386 SO libspdk_notify.so.6.0 00:07:04.386 SYMLINK libspdk_notify.so 00:07:04.732 LIB libspdk_keyring.a 00:07:04.732 LIB libspdk_trace.a 00:07:04.732 SO libspdk_keyring.so.2.0 00:07:04.732 SO libspdk_trace.so.11.0 00:07:04.732 SYMLINK libspdk_keyring.so 00:07:04.732 SYMLINK libspdk_trace.so 00:07:04.991 CC lib/sock/sock_rpc.o 00:07:04.991 CC lib/sock/sock.o 00:07:04.991 CC lib/thread/iobuf.o 00:07:04.991 CC lib/thread/thread.o 00:07:05.560 LIB libspdk_sock.a 00:07:05.560 SO libspdk_sock.so.10.0 00:07:05.820 SYMLINK libspdk_sock.so 00:07:06.078 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:06.078 CC lib/nvme/nvme_ctrlr.o 00:07:06.078 CC lib/nvme/nvme_fabric.o 00:07:06.078 CC lib/nvme/nvme_ns.o 00:07:06.078 CC lib/nvme/nvme_ns_cmd.o 00:07:06.078 CC lib/nvme/nvme_pcie.o 00:07:06.078 CC lib/nvme/nvme_qpair.o 00:07:06.078 CC lib/nvme/nvme_pcie_common.o 00:07:06.078 CC lib/nvme/nvme.o 00:07:07.017 CC lib/nvme/nvme_quirks.o 00:07:07.017 CC lib/nvme/nvme_transport.o 00:07:07.017 CC lib/nvme/nvme_discovery.o 00:07:07.017 LIB libspdk_thread.a 00:07:07.017 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:07.017 SO libspdk_thread.so.11.0 00:07:07.017 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:07.017 CC lib/nvme/nvme_tcp.o 00:07:07.017 SYMLINK libspdk_thread.so 00:07:07.017 CC lib/nvme/nvme_opal.o 00:07:07.276 CC lib/nvme/nvme_io_msg.o 00:07:07.276 CC lib/nvme/nvme_poll_group.o 00:07:07.277 CC lib/nvme/nvme_zns.o 00:07:07.536 CC lib/nvme/nvme_stubs.o 00:07:07.536 CC lib/nvme/nvme_auth.o 00:07:07.536 CC lib/nvme/nvme_cuse.o 00:07:07.796 CC lib/nvme/nvme_rdma.o 00:07:07.796 CC lib/accel/accel.o 00:07:07.796 CC lib/blob/blobstore.o 00:07:08.056 
CC lib/blob/request.o 00:07:08.316 CC lib/init/json_config.o 00:07:08.316 CC lib/virtio/virtio.o 00:07:08.316 CC lib/init/subsystem.o 00:07:08.576 CC lib/init/subsystem_rpc.o 00:07:08.576 CC lib/init/rpc.o 00:07:08.576 CC lib/blob/zeroes.o 00:07:08.576 CC lib/virtio/virtio_vhost_user.o 00:07:08.576 CC lib/fsdev/fsdev.o 00:07:08.576 CC lib/blob/blob_bs_dev.o 00:07:08.834 LIB libspdk_init.a 00:07:08.834 SO libspdk_init.so.6.0 00:07:08.834 CC lib/accel/accel_rpc.o 00:07:08.834 CC lib/accel/accel_sw.o 00:07:08.834 SYMLINK libspdk_init.so 00:07:08.834 CC lib/fsdev/fsdev_io.o 00:07:08.834 CC lib/fsdev/fsdev_rpc.o 00:07:09.094 CC lib/virtio/virtio_vfio_user.o 00:07:09.094 CC lib/virtio/virtio_pci.o 00:07:09.094 CC lib/event/reactor.o 00:07:09.094 CC lib/event/app.o 00:07:09.094 CC lib/event/log_rpc.o 00:07:09.094 LIB libspdk_accel.a 00:07:09.094 SO libspdk_accel.so.16.0 00:07:09.353 CC lib/event/app_rpc.o 00:07:09.353 CC lib/event/scheduler_static.o 00:07:09.353 SYMLINK libspdk_accel.so 00:07:09.353 LIB libspdk_virtio.a 00:07:09.353 LIB libspdk_nvme.a 00:07:09.353 SO libspdk_virtio.so.7.0 00:07:09.353 SYMLINK libspdk_virtio.so 00:07:09.353 CC lib/bdev/bdev.o 00:07:09.353 CC lib/bdev/bdev_rpc.o 00:07:09.353 CC lib/bdev/bdev_zone.o 00:07:09.353 LIB libspdk_fsdev.a 00:07:09.353 CC lib/bdev/part.o 00:07:09.613 SO libspdk_nvme.so.15.0 00:07:09.613 CC lib/bdev/scsi_nvme.o 00:07:09.613 SO libspdk_fsdev.so.2.0 00:07:09.613 SYMLINK libspdk_fsdev.so 00:07:09.613 LIB libspdk_event.a 00:07:09.613 SO libspdk_event.so.14.0 00:07:09.873 SYMLINK libspdk_event.so 00:07:09.873 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:09.873 SYMLINK libspdk_nvme.so 00:07:10.443 LIB libspdk_fuse_dispatcher.a 00:07:10.703 SO libspdk_fuse_dispatcher.so.1.0 00:07:10.703 SYMLINK libspdk_fuse_dispatcher.so 00:07:12.080 LIB libspdk_blob.a 00:07:12.080 SO libspdk_blob.so.12.0 00:07:12.081 SYMLINK libspdk_blob.so 00:07:12.648 CC lib/blobfs/blobfs.o 00:07:12.648 CC lib/blobfs/tree.o 00:07:12.648 CC 
lib/lvol/lvol.o 00:07:12.906 LIB libspdk_bdev.a 00:07:13.164 SO libspdk_bdev.so.17.0 00:07:13.164 SYMLINK libspdk_bdev.so 00:07:13.423 CC lib/scsi/dev.o 00:07:13.423 CC lib/scsi/port.o 00:07:13.423 CC lib/scsi/scsi.o 00:07:13.423 CC lib/scsi/lun.o 00:07:13.423 CC lib/ublk/ublk.o 00:07:13.423 CC lib/nbd/nbd.o 00:07:13.423 CC lib/nvmf/ctrlr.o 00:07:13.423 CC lib/ftl/ftl_core.o 00:07:13.682 LIB libspdk_blobfs.a 00:07:13.682 CC lib/nvmf/ctrlr_discovery.o 00:07:13.682 CC lib/nvmf/ctrlr_bdev.o 00:07:13.682 SO libspdk_blobfs.so.11.0 00:07:13.682 CC lib/nvmf/subsystem.o 00:07:13.682 SYMLINK libspdk_blobfs.so 00:07:13.682 CC lib/nbd/nbd_rpc.o 00:07:13.682 LIB libspdk_lvol.a 00:07:13.940 CC lib/scsi/scsi_bdev.o 00:07:13.941 SO libspdk_lvol.so.11.0 00:07:13.941 SYMLINK libspdk_lvol.so 00:07:13.941 CC lib/ublk/ublk_rpc.o 00:07:13.941 CC lib/nvmf/nvmf.o 00:07:13.941 CC lib/ftl/ftl_init.o 00:07:14.199 LIB libspdk_nbd.a 00:07:14.199 CC lib/ftl/ftl_layout.o 00:07:14.199 SO libspdk_nbd.so.7.0 00:07:14.199 SYMLINK libspdk_nbd.so 00:07:14.199 CC lib/ftl/ftl_debug.o 00:07:14.199 CC lib/ftl/ftl_io.o 00:07:14.199 CC lib/nvmf/nvmf_rpc.o 00:07:14.199 LIB libspdk_ublk.a 00:07:14.457 SO libspdk_ublk.so.3.0 00:07:14.457 SYMLINK libspdk_ublk.so 00:07:14.457 CC lib/nvmf/transport.o 00:07:14.457 CC lib/scsi/scsi_pr.o 00:07:14.457 CC lib/ftl/ftl_sb.o 00:07:14.457 CC lib/ftl/ftl_l2p.o 00:07:14.457 CC lib/nvmf/tcp.o 00:07:14.457 CC lib/ftl/ftl_l2p_flat.o 00:07:14.716 CC lib/ftl/ftl_nv_cache.o 00:07:14.716 CC lib/nvmf/stubs.o 00:07:14.716 CC lib/nvmf/mdns_server.o 00:07:14.975 CC lib/scsi/scsi_rpc.o 00:07:14.975 CC lib/scsi/task.o 00:07:14.975 CC lib/nvmf/rdma.o 00:07:15.234 CC lib/nvmf/auth.o 00:07:15.234 LIB libspdk_scsi.a 00:07:15.234 CC lib/ftl/ftl_band.o 00:07:15.234 CC lib/ftl/ftl_band_ops.o 00:07:15.234 CC lib/ftl/ftl_writer.o 00:07:15.493 SO libspdk_scsi.so.9.0 00:07:15.493 CC lib/ftl/ftl_rq.o 00:07:15.493 SYMLINK libspdk_scsi.so 00:07:15.493 CC lib/ftl/ftl_reloc.o 00:07:15.749 CC 
lib/iscsi/conn.o 00:07:15.749 CC lib/ftl/ftl_l2p_cache.o 00:07:15.749 CC lib/iscsi/init_grp.o 00:07:16.007 CC lib/iscsi/iscsi.o 00:07:16.007 CC lib/iscsi/param.o 00:07:16.007 CC lib/vhost/vhost.o 00:07:16.007 CC lib/ftl/ftl_p2l.o 00:07:16.266 CC lib/iscsi/portal_grp.o 00:07:16.266 CC lib/vhost/vhost_rpc.o 00:07:16.525 CC lib/iscsi/tgt_node.o 00:07:16.525 CC lib/iscsi/iscsi_subsystem.o 00:07:16.525 CC lib/iscsi/iscsi_rpc.o 00:07:16.525 CC lib/ftl/ftl_p2l_log.o 00:07:16.784 CC lib/iscsi/task.o 00:07:16.784 CC lib/ftl/mngt/ftl_mngt.o 00:07:17.043 CC lib/vhost/vhost_scsi.o 00:07:17.043 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:17.043 CC lib/vhost/vhost_blk.o 00:07:17.043 CC lib/vhost/rte_vhost_user.o 00:07:17.043 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:17.302 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:17.302 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:17.302 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:17.561 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:17.561 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:17.561 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:17.561 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:17.819 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:17.819 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:17.819 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:17.819 CC lib/ftl/utils/ftl_conf.o 00:07:18.078 CC lib/ftl/utils/ftl_md.o 00:07:18.078 LIB libspdk_iscsi.a 00:07:18.078 CC lib/ftl/utils/ftl_mempool.o 00:07:18.078 CC lib/ftl/utils/ftl_bitmap.o 00:07:18.078 SO libspdk_iscsi.so.8.0 00:07:18.078 CC lib/ftl/utils/ftl_property.o 00:07:18.078 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:18.337 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:18.337 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:18.337 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:18.337 SYMLINK libspdk_iscsi.so 00:07:18.337 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:18.337 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:18.594 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:18.594 LIB libspdk_vhost.a 00:07:18.594 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:18.594 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:07:18.594 LIB libspdk_nvmf.a 00:07:18.594 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:18.594 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:18.594 SO libspdk_vhost.so.8.0 00:07:18.594 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:18.853 SO libspdk_nvmf.so.20.0 00:07:18.853 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:18.853 CC lib/ftl/base/ftl_base_dev.o 00:07:18.853 CC lib/ftl/base/ftl_base_bdev.o 00:07:18.853 SYMLINK libspdk_vhost.so 00:07:18.853 CC lib/ftl/ftl_trace.o 00:07:19.111 SYMLINK libspdk_nvmf.so 00:07:19.111 LIB libspdk_ftl.a 00:07:19.371 SO libspdk_ftl.so.9.0 00:07:19.632 SYMLINK libspdk_ftl.so 00:07:20.202 CC module/env_dpdk/env_dpdk_rpc.o 00:07:20.202 CC module/scheduler/gscheduler/gscheduler.o 00:07:20.202 CC module/accel/ioat/accel_ioat.o 00:07:20.202 CC module/keyring/file/keyring.o 00:07:20.202 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:20.202 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:20.202 CC module/blob/bdev/blob_bdev.o 00:07:20.202 CC module/sock/posix/posix.o 00:07:20.202 CC module/accel/error/accel_error.o 00:07:20.202 CC module/fsdev/aio/fsdev_aio.o 00:07:20.202 LIB libspdk_env_dpdk_rpc.a 00:07:20.202 SO libspdk_env_dpdk_rpc.so.6.0 00:07:20.202 SYMLINK libspdk_env_dpdk_rpc.so 00:07:20.202 CC module/accel/error/accel_error_rpc.o 00:07:20.202 CC module/keyring/file/keyring_rpc.o 00:07:20.202 LIB libspdk_scheduler_dpdk_governor.a 00:07:20.463 LIB libspdk_scheduler_gscheduler.a 00:07:20.463 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:20.463 SO libspdk_scheduler_gscheduler.so.4.0 00:07:20.463 CC module/accel/ioat/accel_ioat_rpc.o 00:07:20.463 LIB libspdk_scheduler_dynamic.a 00:07:20.463 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:20.463 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:20.463 SO libspdk_scheduler_dynamic.so.4.0 00:07:20.463 SYMLINK libspdk_scheduler_gscheduler.so 00:07:20.463 CC module/fsdev/aio/linux_aio_mgr.o 00:07:20.463 LIB libspdk_accel_error.a 00:07:20.463 LIB libspdk_keyring_file.a 
00:07:20.463 LIB libspdk_blob_bdev.a 00:07:20.463 SYMLINK libspdk_scheduler_dynamic.so 00:07:20.463 SO libspdk_accel_error.so.2.0 00:07:20.463 SO libspdk_keyring_file.so.2.0 00:07:20.463 SO libspdk_blob_bdev.so.12.0 00:07:20.463 LIB libspdk_accel_ioat.a 00:07:20.463 SYMLINK libspdk_keyring_file.so 00:07:20.463 SYMLINK libspdk_accel_error.so 00:07:20.463 SO libspdk_accel_ioat.so.6.0 00:07:20.463 CC module/keyring/linux/keyring.o 00:07:20.463 SYMLINK libspdk_blob_bdev.so 00:07:20.463 CC module/keyring/linux/keyring_rpc.o 00:07:20.722 SYMLINK libspdk_accel_ioat.so 00:07:20.722 CC module/accel/dsa/accel_dsa.o 00:07:20.722 CC module/accel/dsa/accel_dsa_rpc.o 00:07:20.722 LIB libspdk_keyring_linux.a 00:07:20.722 CC module/accel/iaa/accel_iaa.o 00:07:20.722 SO libspdk_keyring_linux.so.1.0 00:07:20.722 CC module/bdev/gpt/gpt.o 00:07:20.722 CC module/bdev/delay/vbdev_delay.o 00:07:20.722 CC module/bdev/error/vbdev_error.o 00:07:20.722 CC module/blobfs/bdev/blobfs_bdev.o 00:07:20.722 SYMLINK libspdk_keyring_linux.so 00:07:20.722 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:20.982 CC module/accel/iaa/accel_iaa_rpc.o 00:07:20.982 CC module/bdev/gpt/vbdev_gpt.o 00:07:20.982 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:20.982 LIB libspdk_fsdev_aio.a 00:07:20.982 LIB libspdk_accel_iaa.a 00:07:20.982 LIB libspdk_accel_dsa.a 00:07:20.982 SO libspdk_fsdev_aio.so.1.0 00:07:20.982 SO libspdk_accel_dsa.so.5.0 00:07:20.982 SO libspdk_accel_iaa.so.3.0 00:07:20.982 LIB libspdk_sock_posix.a 00:07:20.982 SO libspdk_sock_posix.so.6.0 00:07:20.982 CC module/bdev/error/vbdev_error_rpc.o 00:07:21.241 SYMLINK libspdk_accel_dsa.so 00:07:21.241 SYMLINK libspdk_fsdev_aio.so 00:07:21.241 SYMLINK libspdk_accel_iaa.so 00:07:21.241 CC module/bdev/lvol/vbdev_lvol.o 00:07:21.241 CC module/bdev/malloc/bdev_malloc.o 00:07:21.241 LIB libspdk_blobfs_bdev.a 00:07:21.241 SYMLINK libspdk_sock_posix.so 00:07:21.241 SO libspdk_blobfs_bdev.so.6.0 00:07:21.241 LIB libspdk_bdev_delay.a 00:07:21.241 LIB 
libspdk_bdev_gpt.a 00:07:21.241 SO libspdk_bdev_delay.so.6.0 00:07:21.241 CC module/bdev/null/bdev_null.o 00:07:21.241 CC module/bdev/nvme/bdev_nvme.o 00:07:21.241 SO libspdk_bdev_gpt.so.6.0 00:07:21.241 LIB libspdk_bdev_error.a 00:07:21.241 SYMLINK libspdk_blobfs_bdev.so 00:07:21.241 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:21.241 CC module/bdev/passthru/vbdev_passthru.o 00:07:21.241 SO libspdk_bdev_error.so.6.0 00:07:21.241 SYMLINK libspdk_bdev_delay.so 00:07:21.241 CC module/bdev/null/bdev_null_rpc.o 00:07:21.241 CC module/bdev/raid/bdev_raid.o 00:07:21.241 SYMLINK libspdk_bdev_gpt.so 00:07:21.241 CC module/bdev/raid/bdev_raid_rpc.o 00:07:21.500 SYMLINK libspdk_bdev_error.so 00:07:21.500 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:21.500 CC module/bdev/nvme/nvme_rpc.o 00:07:21.500 LIB libspdk_bdev_null.a 00:07:21.500 CC module/bdev/raid/bdev_raid_sb.o 00:07:21.500 SO libspdk_bdev_null.so.6.0 00:07:21.759 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:21.759 LIB libspdk_bdev_passthru.a 00:07:21.759 SYMLINK libspdk_bdev_null.so 00:07:21.759 CC module/bdev/split/vbdev_split.o 00:07:21.759 SO libspdk_bdev_passthru.so.6.0 00:07:21.759 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:21.759 SYMLINK libspdk_bdev_passthru.so 00:07:21.759 CC module/bdev/split/vbdev_split_rpc.o 00:07:21.759 LIB libspdk_bdev_malloc.a 00:07:21.759 CC module/bdev/raid/raid0.o 00:07:22.018 SO libspdk_bdev_malloc.so.6.0 00:07:22.018 CC module/bdev/nvme/bdev_mdns_client.o 00:07:22.018 CC module/bdev/nvme/vbdev_opal.o 00:07:22.018 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:22.018 SYMLINK libspdk_bdev_malloc.so 00:07:22.018 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:22.018 LIB libspdk_bdev_split.a 00:07:22.018 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:22.018 SO libspdk_bdev_split.so.6.0 00:07:22.018 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:22.284 SYMLINK libspdk_bdev_split.so 00:07:22.284 CC module/bdev/raid/raid1.o 00:07:22.284 LIB libspdk_bdev_lvol.a 00:07:22.284 SO 
libspdk_bdev_lvol.so.6.0 00:07:22.284 CC module/bdev/raid/concat.o 00:07:22.284 SYMLINK libspdk_bdev_lvol.so 00:07:22.284 CC module/bdev/aio/bdev_aio.o 00:07:22.284 CC module/bdev/ftl/bdev_ftl.o 00:07:22.284 CC module/bdev/raid/raid5f.o 00:07:22.284 LIB libspdk_bdev_zone_block.a 00:07:22.544 SO libspdk_bdev_zone_block.so.6.0 00:07:22.544 CC module/bdev/iscsi/bdev_iscsi.o 00:07:22.544 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:22.544 SYMLINK libspdk_bdev_zone_block.so 00:07:22.544 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:22.544 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:22.544 CC module/bdev/aio/bdev_aio_rpc.o 00:07:22.909 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:22.909 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:22.909 LIB libspdk_bdev_aio.a 00:07:22.909 SO libspdk_bdev_aio.so.6.0 00:07:22.909 SYMLINK libspdk_bdev_aio.so 00:07:22.909 LIB libspdk_bdev_iscsi.a 00:07:22.909 LIB libspdk_bdev_ftl.a 00:07:22.909 SO libspdk_bdev_iscsi.so.6.0 00:07:22.909 LIB libspdk_bdev_raid.a 00:07:22.909 SO libspdk_bdev_ftl.so.6.0 00:07:23.169 SYMLINK libspdk_bdev_iscsi.so 00:07:23.169 SO libspdk_bdev_raid.so.6.0 00:07:23.169 SYMLINK libspdk_bdev_ftl.so 00:07:23.169 LIB libspdk_bdev_virtio.a 00:07:23.169 SO libspdk_bdev_virtio.so.6.0 00:07:23.169 SYMLINK libspdk_bdev_raid.so 00:07:23.169 SYMLINK libspdk_bdev_virtio.so 00:07:25.070 LIB libspdk_bdev_nvme.a 00:07:25.070 SO libspdk_bdev_nvme.so.7.1 00:07:25.070 SYMLINK libspdk_bdev_nvme.so 00:07:25.637 CC module/event/subsystems/sock/sock.o 00:07:25.637 CC module/event/subsystems/keyring/keyring.o 00:07:25.637 CC module/event/subsystems/scheduler/scheduler.o 00:07:25.637 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:25.637 CC module/event/subsystems/iobuf/iobuf.o 00:07:25.637 CC module/event/subsystems/fsdev/fsdev.o 00:07:25.637 CC module/event/subsystems/vmd/vmd.o 00:07:25.637 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:25.637 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:25.637 LIB libspdk_event_scheduler.a 
00:07:25.637 LIB libspdk_event_fsdev.a 00:07:25.637 LIB libspdk_event_sock.a 00:07:25.637 LIB libspdk_event_iobuf.a 00:07:25.637 SO libspdk_event_scheduler.so.4.0 00:07:25.637 SO libspdk_event_fsdev.so.1.0 00:07:25.896 LIB libspdk_event_vhost_blk.a 00:07:25.896 LIB libspdk_event_keyring.a 00:07:25.896 SO libspdk_event_iobuf.so.3.0 00:07:25.896 SO libspdk_event_sock.so.5.0 00:07:25.896 LIB libspdk_event_vmd.a 00:07:25.896 SO libspdk_event_vhost_blk.so.3.0 00:07:25.896 SO libspdk_event_keyring.so.1.0 00:07:25.896 SYMLINK libspdk_event_scheduler.so 00:07:25.896 SO libspdk_event_vmd.so.6.0 00:07:25.896 SYMLINK libspdk_event_fsdev.so 00:07:25.896 SYMLINK libspdk_event_iobuf.so 00:07:25.896 SYMLINK libspdk_event_sock.so 00:07:25.896 SYMLINK libspdk_event_keyring.so 00:07:25.896 SYMLINK libspdk_event_vhost_blk.so 00:07:25.896 SYMLINK libspdk_event_vmd.so 00:07:26.198 CC module/event/subsystems/accel/accel.o 00:07:26.457 LIB libspdk_event_accel.a 00:07:26.457 SO libspdk_event_accel.so.6.0 00:07:26.457 SYMLINK libspdk_event_accel.so 00:07:27.024 CC module/event/subsystems/bdev/bdev.o 00:07:27.024 LIB libspdk_event_bdev.a 00:07:27.024 SO libspdk_event_bdev.so.6.0 00:07:27.024 SYMLINK libspdk_event_bdev.so 00:07:27.591 CC module/event/subsystems/ublk/ublk.o 00:07:27.591 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:27.591 CC module/event/subsystems/scsi/scsi.o 00:07:27.591 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:27.591 CC module/event/subsystems/nbd/nbd.o 00:07:27.591 LIB libspdk_event_ublk.a 00:07:27.591 LIB libspdk_event_nbd.a 00:07:27.591 LIB libspdk_event_scsi.a 00:07:27.591 SO libspdk_event_ublk.so.3.0 00:07:27.591 SO libspdk_event_nbd.so.6.0 00:07:27.591 SO libspdk_event_scsi.so.6.0 00:07:27.591 SYMLINK libspdk_event_ublk.so 00:07:27.850 SYMLINK libspdk_event_nbd.so 00:07:27.850 SYMLINK libspdk_event_scsi.so 00:07:27.850 LIB libspdk_event_nvmf.a 00:07:27.850 SO libspdk_event_nvmf.so.6.0 00:07:27.850 SYMLINK libspdk_event_nvmf.so 00:07:28.109 CC 
module/event/subsystems/iscsi/iscsi.o 00:07:28.109 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:28.109 LIB libspdk_event_vhost_scsi.a 00:07:28.368 LIB libspdk_event_iscsi.a 00:07:28.368 SO libspdk_event_vhost_scsi.so.3.0 00:07:28.368 SO libspdk_event_iscsi.so.6.0 00:07:28.368 SYMLINK libspdk_event_vhost_scsi.so 00:07:28.368 SYMLINK libspdk_event_iscsi.so 00:07:28.626 SO libspdk.so.6.0 00:07:28.626 SYMLINK libspdk.so 00:07:28.885 CC app/spdk_lspci/spdk_lspci.o 00:07:28.885 CC app/spdk_nvme_perf/perf.o 00:07:28.885 CC app/trace_record/trace_record.o 00:07:28.885 CXX app/trace/trace.o 00:07:28.885 CC app/spdk_nvme_identify/identify.o 00:07:28.885 CC app/nvmf_tgt/nvmf_main.o 00:07:28.885 CC app/iscsi_tgt/iscsi_tgt.o 00:07:28.885 CC app/spdk_tgt/spdk_tgt.o 00:07:28.885 CC test/thread/poller_perf/poller_perf.o 00:07:28.885 CC examples/util/zipf/zipf.o 00:07:28.885 LINK spdk_lspci 00:07:29.143 LINK nvmf_tgt 00:07:29.143 LINK spdk_trace_record 00:07:29.143 LINK poller_perf 00:07:29.143 LINK iscsi_tgt 00:07:29.143 LINK zipf 00:07:29.143 LINK spdk_tgt 00:07:29.143 LINK spdk_trace 00:07:29.400 CC app/spdk_nvme_discover/discovery_aer.o 00:07:29.400 CC app/spdk_top/spdk_top.o 00:07:29.400 CC test/app/bdev_svc/bdev_svc.o 00:07:29.400 CC test/dma/test_dma/test_dma.o 00:07:29.659 CC app/spdk_dd/spdk_dd.o 00:07:29.659 LINK spdk_nvme_discover 00:07:29.659 CC examples/ioat/perf/perf.o 00:07:29.659 CC examples/ioat/verify/verify.o 00:07:29.659 CC examples/vmd/lsvmd/lsvmd.o 00:07:29.659 LINK bdev_svc 00:07:29.916 LINK ioat_perf 00:07:29.916 LINK verify 00:07:29.916 LINK lsvmd 00:07:30.174 LINK spdk_dd 00:07:30.174 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:30.174 LINK spdk_nvme_identify 00:07:30.174 LINK spdk_nvme_perf 00:07:30.174 LINK test_dma 00:07:30.174 CC examples/idxd/perf/perf.o 00:07:30.174 CC examples/vmd/led/led.o 00:07:30.174 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:30.432 TEST_HEADER include/spdk/accel.h 00:07:30.432 TEST_HEADER 
include/spdk/accel_module.h 00:07:30.432 TEST_HEADER include/spdk/assert.h 00:07:30.432 TEST_HEADER include/spdk/barrier.h 00:07:30.432 TEST_HEADER include/spdk/base64.h 00:07:30.432 TEST_HEADER include/spdk/bdev.h 00:07:30.432 TEST_HEADER include/spdk/bdev_module.h 00:07:30.432 TEST_HEADER include/spdk/bdev_zone.h 00:07:30.432 TEST_HEADER include/spdk/bit_array.h 00:07:30.432 TEST_HEADER include/spdk/bit_pool.h 00:07:30.432 TEST_HEADER include/spdk/blob_bdev.h 00:07:30.432 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:30.432 TEST_HEADER include/spdk/blobfs.h 00:07:30.432 TEST_HEADER include/spdk/blob.h 00:07:30.432 TEST_HEADER include/spdk/conf.h 00:07:30.432 TEST_HEADER include/spdk/config.h 00:07:30.432 TEST_HEADER include/spdk/cpuset.h 00:07:30.432 TEST_HEADER include/spdk/crc16.h 00:07:30.432 TEST_HEADER include/spdk/crc32.h 00:07:30.432 TEST_HEADER include/spdk/crc64.h 00:07:30.432 TEST_HEADER include/spdk/dif.h 00:07:30.432 TEST_HEADER include/spdk/dma.h 00:07:30.432 TEST_HEADER include/spdk/endian.h 00:07:30.432 TEST_HEADER include/spdk/env_dpdk.h 00:07:30.432 TEST_HEADER include/spdk/env.h 00:07:30.432 TEST_HEADER include/spdk/event.h 00:07:30.432 TEST_HEADER include/spdk/fd_group.h 00:07:30.432 TEST_HEADER include/spdk/fd.h 00:07:30.432 TEST_HEADER include/spdk/file.h 00:07:30.432 TEST_HEADER include/spdk/fsdev.h 00:07:30.432 TEST_HEADER include/spdk/fsdev_module.h 00:07:30.432 TEST_HEADER include/spdk/ftl.h 00:07:30.432 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:30.432 TEST_HEADER include/spdk/gpt_spec.h 00:07:30.432 TEST_HEADER include/spdk/hexlify.h 00:07:30.432 TEST_HEADER include/spdk/histogram_data.h 00:07:30.432 TEST_HEADER include/spdk/idxd.h 00:07:30.432 TEST_HEADER include/spdk/idxd_spec.h 00:07:30.432 TEST_HEADER include/spdk/init.h 00:07:30.432 TEST_HEADER include/spdk/ioat.h 00:07:30.432 TEST_HEADER include/spdk/ioat_spec.h 00:07:30.432 TEST_HEADER include/spdk/iscsi_spec.h 00:07:30.432 TEST_HEADER include/spdk/json.h 00:07:30.432 
TEST_HEADER include/spdk/jsonrpc.h 00:07:30.432 TEST_HEADER include/spdk/keyring.h 00:07:30.432 TEST_HEADER include/spdk/keyring_module.h 00:07:30.432 TEST_HEADER include/spdk/likely.h 00:07:30.432 TEST_HEADER include/spdk/log.h 00:07:30.432 TEST_HEADER include/spdk/lvol.h 00:07:30.432 TEST_HEADER include/spdk/md5.h 00:07:30.432 TEST_HEADER include/spdk/memory.h 00:07:30.432 TEST_HEADER include/spdk/mmio.h 00:07:30.432 TEST_HEADER include/spdk/nbd.h 00:07:30.432 TEST_HEADER include/spdk/net.h 00:07:30.432 TEST_HEADER include/spdk/notify.h 00:07:30.432 LINK led 00:07:30.432 TEST_HEADER include/spdk/nvme.h 00:07:30.432 CC test/rpc_client/rpc_client_test.o 00:07:30.432 TEST_HEADER include/spdk/nvme_intel.h 00:07:30.432 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:30.432 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:30.432 TEST_HEADER include/spdk/nvme_spec.h 00:07:30.432 TEST_HEADER include/spdk/nvme_zns.h 00:07:30.432 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:30.432 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:30.432 TEST_HEADER include/spdk/nvmf.h 00:07:30.432 TEST_HEADER include/spdk/nvmf_spec.h 00:07:30.432 TEST_HEADER include/spdk/nvmf_transport.h 00:07:30.432 TEST_HEADER include/spdk/opal.h 00:07:30.432 TEST_HEADER include/spdk/opal_spec.h 00:07:30.432 TEST_HEADER include/spdk/pci_ids.h 00:07:30.432 TEST_HEADER include/spdk/pipe.h 00:07:30.432 CC test/event/event_perf/event_perf.o 00:07:30.432 TEST_HEADER include/spdk/queue.h 00:07:30.432 TEST_HEADER include/spdk/reduce.h 00:07:30.432 TEST_HEADER include/spdk/rpc.h 00:07:30.432 TEST_HEADER include/spdk/scheduler.h 00:07:30.432 TEST_HEADER include/spdk/scsi.h 00:07:30.690 TEST_HEADER include/spdk/scsi_spec.h 00:07:30.691 TEST_HEADER include/spdk/sock.h 00:07:30.691 TEST_HEADER include/spdk/stdinc.h 00:07:30.691 TEST_HEADER include/spdk/string.h 00:07:30.691 TEST_HEADER include/spdk/thread.h 00:07:30.691 TEST_HEADER include/spdk/trace.h 00:07:30.691 TEST_HEADER include/spdk/trace_parser.h 00:07:30.691 
TEST_HEADER include/spdk/tree.h 00:07:30.691 TEST_HEADER include/spdk/ublk.h 00:07:30.691 TEST_HEADER include/spdk/util.h 00:07:30.691 TEST_HEADER include/spdk/uuid.h 00:07:30.691 TEST_HEADER include/spdk/version.h 00:07:30.691 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:30.691 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:30.691 TEST_HEADER include/spdk/vhost.h 00:07:30.691 CC test/env/mem_callbacks/mem_callbacks.o 00:07:30.691 TEST_HEADER include/spdk/vmd.h 00:07:30.691 TEST_HEADER include/spdk/xor.h 00:07:30.691 LINK interrupt_tgt 00:07:30.691 TEST_HEADER include/spdk/zipf.h 00:07:30.691 CXX test/cpp_headers/accel.o 00:07:30.691 CC examples/thread/thread/thread_ex.o 00:07:30.691 LINK nvme_fuzz 00:07:30.691 LINK idxd_perf 00:07:30.691 CXX test/cpp_headers/accel_module.o 00:07:30.691 LINK spdk_top 00:07:30.691 LINK rpc_client_test 00:07:30.948 LINK event_perf 00:07:30.948 CC test/env/vtophys/vtophys.o 00:07:30.948 CXX test/cpp_headers/assert.o 00:07:30.948 CC examples/sock/hello_world/hello_sock.o 00:07:30.948 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:30.949 LINK thread 00:07:31.206 CXX test/cpp_headers/barrier.o 00:07:31.206 CC test/event/reactor/reactor.o 00:07:31.206 CC app/fio/nvme/fio_plugin.o 00:07:31.206 CC app/vhost/vhost.o 00:07:31.206 CC test/event/reactor_perf/reactor_perf.o 00:07:31.206 LINK vtophys 00:07:31.206 LINK mem_callbacks 00:07:31.206 LINK hello_sock 00:07:31.464 CXX test/cpp_headers/base64.o 00:07:31.464 LINK reactor 00:07:31.464 LINK reactor_perf 00:07:31.464 CC test/event/app_repeat/app_repeat.o 00:07:31.464 LINK vhost 00:07:31.464 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:31.464 CC test/event/scheduler/scheduler.o 00:07:31.721 CXX test/cpp_headers/bdev.o 00:07:31.721 LINK env_dpdk_post_init 00:07:31.721 CXX test/cpp_headers/bdev_module.o 00:07:31.721 CC test/env/memory/memory_ut.o 00:07:31.721 LINK app_repeat 00:07:31.721 CC examples/accel/perf/accel_perf.o 00:07:31.979 LINK spdk_nvme 00:07:31.979 LINK scheduler 
00:07:31.979 CC test/env/pci/pci_ut.o 00:07:31.979 CXX test/cpp_headers/bdev_zone.o 00:07:32.265 CC app/fio/bdev/fio_plugin.o 00:07:32.265 CC test/accel/dif/dif.o 00:07:32.265 CC test/blobfs/mkfs/mkfs.o 00:07:32.265 CXX test/cpp_headers/bit_array.o 00:07:32.523 LINK mkfs 00:07:32.523 CC test/lvol/esnap/esnap.o 00:07:32.523 CC examples/blob/hello_world/hello_blob.o 00:07:32.524 CXX test/cpp_headers/bit_pool.o 00:07:32.524 LINK pci_ut 00:07:32.781 CXX test/cpp_headers/blob_bdev.o 00:07:32.781 LINK accel_perf 00:07:32.781 LINK hello_blob 00:07:32.781 LINK spdk_bdev 00:07:32.781 CXX test/cpp_headers/blobfs_bdev.o 00:07:33.039 CC examples/nvme/hello_world/hello_world.o 00:07:33.039 CC examples/blob/cli/blobcli.o 00:07:33.039 CXX test/cpp_headers/blobfs.o 00:07:33.039 CC test/nvme/aer/aer.o 00:07:33.297 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:33.297 CC examples/bdev/hello_world/hello_bdev.o 00:07:33.297 CXX test/cpp_headers/blob.o 00:07:33.297 LINK hello_world 00:07:33.555 CXX test/cpp_headers/conf.o 00:07:33.555 LINK aer 00:07:33.555 LINK hello_fsdev 00:07:33.555 LINK iscsi_fuzz 00:07:33.812 CXX test/cpp_headers/config.o 00:07:33.812 LINK dif 00:07:33.812 CXX test/cpp_headers/cpuset.o 00:07:33.812 LINK hello_bdev 00:07:33.812 LINK blobcli 00:07:33.812 CC examples/nvme/reconnect/reconnect.o 00:07:33.812 LINK memory_ut 00:07:33.812 CC test/nvme/reset/reset.o 00:07:33.812 CXX test/cpp_headers/crc16.o 00:07:33.812 CC test/nvme/sgl/sgl.o 00:07:34.070 CC test/nvme/e2edp/nvme_dp.o 00:07:34.070 CXX test/cpp_headers/crc32.o 00:07:34.070 CC test/nvme/overhead/overhead.o 00:07:34.071 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:34.329 LINK reset 00:07:34.329 CC test/nvme/err_injection/err_injection.o 00:07:34.329 CC examples/bdev/bdevperf/bdevperf.o 00:07:34.329 CXX test/cpp_headers/crc64.o 00:07:34.329 LINK sgl 00:07:34.329 LINK reconnect 00:07:34.329 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:34.329 CXX test/cpp_headers/dif.o 00:07:34.588 LINK nvme_dp 
00:07:34.588 LINK overhead 00:07:34.588 LINK err_injection 00:07:34.588 CXX test/cpp_headers/dma.o 00:07:34.846 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:34.846 CC test/bdev/bdevio/bdevio.o 00:07:34.846 CC test/app/jsoncat/jsoncat.o 00:07:34.846 CC test/app/histogram_perf/histogram_perf.o 00:07:34.846 CC test/app/stub/stub.o 00:07:34.846 CXX test/cpp_headers/endian.o 00:07:34.846 CC test/nvme/startup/startup.o 00:07:34.846 LINK histogram_perf 00:07:34.846 LINK jsoncat 00:07:35.106 LINK stub 00:07:35.106 CXX test/cpp_headers/env_dpdk.o 00:07:35.106 LINK vhost_fuzz 00:07:35.106 LINK startup 00:07:35.106 CXX test/cpp_headers/env.o 00:07:35.106 CC test/nvme/reserve/reserve.o 00:07:35.365 LINK bdevio 00:07:35.365 CC examples/nvme/hotplug/hotplug.o 00:07:35.365 CXX test/cpp_headers/event.o 00:07:35.365 CC examples/nvme/arbitration/arbitration.o 00:07:35.365 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:35.365 LINK bdevperf 00:07:35.365 CC test/nvme/simple_copy/simple_copy.o 00:07:35.366 LINK nvme_manage 00:07:35.366 CXX test/cpp_headers/fd_group.o 00:07:35.366 LINK reserve 00:07:35.625 LINK cmb_copy 00:07:35.625 LINK hotplug 00:07:35.625 CC examples/nvme/abort/abort.o 00:07:35.625 CXX test/cpp_headers/fd.o 00:07:35.626 LINK simple_copy 00:07:35.626 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:35.626 CXX test/cpp_headers/file.o 00:07:35.626 LINK arbitration 00:07:35.626 CXX test/cpp_headers/fsdev.o 00:07:35.626 CC test/nvme/connect_stress/connect_stress.o 00:07:35.885 CC test/nvme/compliance/nvme_compliance.o 00:07:35.885 CC test/nvme/boot_partition/boot_partition.o 00:07:35.885 CXX test/cpp_headers/fsdev_module.o 00:07:35.885 LINK pmr_persistence 00:07:35.885 CC test/nvme/fused_ordering/fused_ordering.o 00:07:35.885 LINK connect_stress 00:07:35.885 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:35.885 CC test/nvme/fdp/fdp.o 00:07:35.885 LINK boot_partition 00:07:35.885 LINK abort 00:07:36.144 CXX test/cpp_headers/ftl.o 00:07:36.144 CXX 
test/cpp_headers/fuse_dispatcher.o 00:07:36.144 CC test/nvme/cuse/cuse.o 00:07:36.144 LINK fused_ordering 00:07:36.144 LINK doorbell_aers 00:07:36.144 LINK nvme_compliance 00:07:36.144 CXX test/cpp_headers/gpt_spec.o 00:07:36.144 CXX test/cpp_headers/hexlify.o 00:07:36.403 CXX test/cpp_headers/histogram_data.o 00:07:36.403 CXX test/cpp_headers/idxd.o 00:07:36.403 CXX test/cpp_headers/idxd_spec.o 00:07:36.403 LINK fdp 00:07:36.403 CC examples/nvmf/nvmf/nvmf.o 00:07:36.403 CXX test/cpp_headers/init.o 00:07:36.403 CXX test/cpp_headers/ioat.o 00:07:36.403 CXX test/cpp_headers/ioat_spec.o 00:07:36.403 CXX test/cpp_headers/iscsi_spec.o 00:07:36.403 CXX test/cpp_headers/json.o 00:07:36.403 CXX test/cpp_headers/jsonrpc.o 00:07:36.662 CXX test/cpp_headers/keyring.o 00:07:36.662 CXX test/cpp_headers/keyring_module.o 00:07:36.662 CXX test/cpp_headers/likely.o 00:07:36.662 CXX test/cpp_headers/log.o 00:07:36.662 CXX test/cpp_headers/lvol.o 00:07:36.662 CXX test/cpp_headers/md5.o 00:07:36.662 CXX test/cpp_headers/memory.o 00:07:36.662 CXX test/cpp_headers/mmio.o 00:07:36.662 LINK nvmf 00:07:36.662 CXX test/cpp_headers/nbd.o 00:07:36.662 CXX test/cpp_headers/net.o 00:07:36.662 CXX test/cpp_headers/notify.o 00:07:36.662 CXX test/cpp_headers/nvme.o 00:07:36.662 CXX test/cpp_headers/nvme_intel.o 00:07:36.920 CXX test/cpp_headers/nvme_ocssd.o 00:07:36.920 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:36.920 CXX test/cpp_headers/nvme_spec.o 00:07:36.920 CXX test/cpp_headers/nvme_zns.o 00:07:36.920 CXX test/cpp_headers/nvmf_cmd.o 00:07:36.920 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:36.920 CXX test/cpp_headers/nvmf.o 00:07:36.920 CXX test/cpp_headers/nvmf_spec.o 00:07:36.920 CXX test/cpp_headers/nvmf_transport.o 00:07:37.179 CXX test/cpp_headers/opal.o 00:07:37.179 CXX test/cpp_headers/opal_spec.o 00:07:37.179 CXX test/cpp_headers/pci_ids.o 00:07:37.179 CXX test/cpp_headers/pipe.o 00:07:37.179 CXX test/cpp_headers/queue.o 00:07:37.179 CXX test/cpp_headers/reduce.o 00:07:37.179 CXX 
test/cpp_headers/rpc.o 00:07:37.179 CXX test/cpp_headers/scheduler.o 00:07:37.179 CXX test/cpp_headers/scsi.o 00:07:37.438 CXX test/cpp_headers/scsi_spec.o 00:07:37.438 CXX test/cpp_headers/sock.o 00:07:37.438 CXX test/cpp_headers/stdinc.o 00:07:37.438 CXX test/cpp_headers/string.o 00:07:37.438 CXX test/cpp_headers/thread.o 00:07:37.438 CXX test/cpp_headers/trace.o 00:07:37.438 CXX test/cpp_headers/trace_parser.o 00:07:37.438 CXX test/cpp_headers/tree.o 00:07:37.438 CXX test/cpp_headers/ublk.o 00:07:37.438 CXX test/cpp_headers/util.o 00:07:37.438 CXX test/cpp_headers/uuid.o 00:07:37.438 CXX test/cpp_headers/version.o 00:07:37.438 CXX test/cpp_headers/vfio_user_pci.o 00:07:37.438 CXX test/cpp_headers/vfio_user_spec.o 00:07:37.438 CXX test/cpp_headers/vhost.o 00:07:37.438 CXX test/cpp_headers/vmd.o 00:07:37.438 CXX test/cpp_headers/xor.o 00:07:37.696 CXX test/cpp_headers/zipf.o 00:07:37.696 LINK cuse 00:07:39.598 LINK esnap 00:07:40.168 00:07:40.168 real 1m41.296s 00:07:40.168 user 9m7.528s 00:07:40.168 sys 1m42.372s 00:07:40.168 14:07:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:40.168 14:07:10 make -- common/autotest_common.sh@10 -- $ set +x 00:07:40.168 ************************************ 00:07:40.168 END TEST make 00:07:40.168 ************************************ 00:07:40.168 14:07:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:40.168 14:07:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:40.168 14:07:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:40.168 14:07:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.168 14:07:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:40.168 14:07:10 -- pm/common@44 -- $ pid=5466 00:07:40.168 14:07:10 -- pm/common@50 -- $ kill -TERM 5466 00:07:40.168 14:07:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.168 14:07:10 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:40.168 14:07:10 -- pm/common@44 -- $ pid=5468 00:07:40.168 14:07:10 -- pm/common@50 -- $ kill -TERM 5468 00:07:40.168 14:07:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:40.168 14:07:10 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:40.168 14:07:11 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.168 14:07:11 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.168 14:07:11 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.168 14:07:11 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.168 14:07:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.168 14:07:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.168 14:07:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.168 14:07:11 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.168 14:07:11 -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.168 14:07:11 -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.168 14:07:11 -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.168 14:07:11 -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.168 14:07:11 -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.168 14:07:11 -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.168 14:07:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.168 14:07:11 -- scripts/common.sh@344 -- # case "$op" in 00:07:40.168 14:07:11 -- scripts/common.sh@345 -- # : 1 00:07:40.168 14:07:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.168 14:07:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.168 14:07:11 -- scripts/common.sh@365 -- # decimal 1 00:07:40.168 14:07:11 -- scripts/common.sh@353 -- # local d=1 00:07:40.168 14:07:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.168 14:07:11 -- scripts/common.sh@355 -- # echo 1 00:07:40.168 14:07:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.168 14:07:11 -- scripts/common.sh@366 -- # decimal 2 00:07:40.168 14:07:11 -- scripts/common.sh@353 -- # local d=2 00:07:40.168 14:07:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.168 14:07:11 -- scripts/common.sh@355 -- # echo 2 00:07:40.168 14:07:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.169 14:07:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.169 14:07:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.169 14:07:11 -- scripts/common.sh@368 -- # return 0 00:07:40.169 14:07:11 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.169 14:07:11 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.169 --rc genhtml_branch_coverage=1 00:07:40.169 --rc genhtml_function_coverage=1 00:07:40.169 --rc genhtml_legend=1 00:07:40.169 --rc geninfo_all_blocks=1 00:07:40.169 --rc geninfo_unexecuted_blocks=1 00:07:40.169 00:07:40.169 ' 00:07:40.169 14:07:11 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.169 --rc genhtml_branch_coverage=1 00:07:40.169 --rc genhtml_function_coverage=1 00:07:40.169 --rc genhtml_legend=1 00:07:40.169 --rc geninfo_all_blocks=1 00:07:40.169 --rc geninfo_unexecuted_blocks=1 00:07:40.169 00:07:40.169 ' 00:07:40.169 14:07:11 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.169 --rc genhtml_branch_coverage=1 00:07:40.169 --rc 
genhtml_function_coverage=1 00:07:40.169 --rc genhtml_legend=1 00:07:40.169 --rc geninfo_all_blocks=1 00:07:40.169 --rc geninfo_unexecuted_blocks=1 00:07:40.169 00:07:40.169 ' 00:07:40.169 14:07:11 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:40.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.169 --rc genhtml_branch_coverage=1 00:07:40.169 --rc genhtml_function_coverage=1 00:07:40.169 --rc genhtml_legend=1 00:07:40.169 --rc geninfo_all_blocks=1 00:07:40.169 --rc geninfo_unexecuted_blocks=1 00:07:40.169 00:07:40.169 ' 00:07:40.169 14:07:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:40.430 14:07:11 -- nvmf/common.sh@7 -- # uname -s 00:07:40.430 14:07:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.430 14:07:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.430 14:07:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.430 14:07:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.430 14:07:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.430 14:07:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.430 14:07:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.430 14:07:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.430 14:07:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.430 14:07:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.430 14:07:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b99a9277-5741-41d9-98a0-55197f077e50 00:07:40.430 14:07:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=b99a9277-5741-41d9-98a0-55197f077e50 00:07:40.430 14:07:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.430 14:07:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.430 14:07:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:40.430 14:07:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:40.430 14:07:11 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.430 14:07:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.430 14:07:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.430 14:07:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.430 14:07:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.430 14:07:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.430 14:07:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.430 14:07:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.430 14:07:11 -- paths/export.sh@5 -- # export PATH 00:07:40.430 14:07:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.430 14:07:11 -- nvmf/common.sh@51 -- # : 0 00:07:40.430 14:07:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:40.430 14:07:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:40.430 14:07:11 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:07:40.430 14:07:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.430 14:07:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.430 14:07:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:40.430 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:40.430 14:07:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:40.430 14:07:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:40.430 14:07:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:40.430 14:07:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:40.430 14:07:11 -- spdk/autotest.sh@32 -- # uname -s 00:07:40.430 14:07:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:40.430 14:07:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:40.430 14:07:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:40.430 14:07:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:40.430 14:07:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:40.430 14:07:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:40.430 14:07:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:40.430 14:07:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:40.430 14:07:11 -- spdk/autotest.sh@48 -- # udevadm_pid=54625 00:07:40.430 14:07:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:40.430 14:07:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:40.430 14:07:11 -- pm/common@17 -- # local monitor 00:07:40.430 14:07:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.430 14:07:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:40.430 14:07:11 -- pm/common@21 -- # date +%s 00:07:40.430 14:07:11 -- pm/common@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716431 00:07:40.430 14:07:11 -- pm/common@25 -- # sleep 1 00:07:40.430 14:07:11 -- pm/common@21 -- # date +%s 00:07:40.430 14:07:11 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716431 00:07:40.430 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716431_collect-cpu-load.pm.log 00:07:40.430 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716431_collect-vmstat.pm.log 00:07:41.369 14:07:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:41.369 14:07:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:41.369 14:07:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.369 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.369 14:07:12 -- spdk/autotest.sh@59 -- # create_test_list 00:07:41.369 14:07:12 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:41.369 14:07:12 -- common/autotest_common.sh@10 -- # set +x 00:07:41.370 14:07:12 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:41.370 14:07:12 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:41.630 14:07:12 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:41.630 14:07:12 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:41.630 14:07:12 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:41.630 14:07:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:41.630 14:07:12 -- common/autotest_common.sh@1457 -- # uname 00:07:41.630 14:07:12 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:41.630 14:07:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:41.630 14:07:12 -- 
common/autotest_common.sh@1477 -- # uname 00:07:41.630 14:07:12 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:41.630 14:07:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:41.630 14:07:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:41.630 lcov: LCOV version 1.15 00:07:41.630 14:07:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:59.773 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:59.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:12.015 14:07:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:12.015 14:07:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:12.015 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.015 14:07:42 -- spdk/autotest.sh@78 -- # rm -f 00:08:12.015 14:07:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.952 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:12.952 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:12.952 14:07:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:12.952 14:07:43 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:12.952 14:07:43 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:12.952 14:07:43 -- common/autotest_common.sh@1658 -- # 
local nvme bdf 00:08:12.952 14:07:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:12.952 14:07:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:12.952 14:07:43 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:12.952 14:07:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:12.952 14:07:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:08:12.952 14:07:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:12.952 14:07:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:12.952 14:07:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:08:12.952 14:07:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:08:12.952 14:07:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:12.952 14:07:43 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:08:12.952 14:07:43 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:08:12.952 14:07:43 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:12.952 14:07:43 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:12.952 14:07:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:12.952 14:07:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:12.952 14:07:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:12.952 14:07:43 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:12.952 14:07:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:12.952 14:07:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:12.952 No valid GPT data, bailing 00:08:12.953 14:07:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:12.953 14:07:43 -- scripts/common.sh@394 -- # pt= 00:08:12.953 14:07:43 -- scripts/common.sh@395 -- # return 1 00:08:12.953 14:07:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:12.953 1+0 records in 00:08:12.953 1+0 records out 00:08:12.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640479 s, 164 MB/s 00:08:12.953 14:07:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:12.953 14:07:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:12.953 14:07:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:08:12.953 14:07:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:08:12.953 14:07:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:13.212 No valid GPT data, bailing 00:08:13.212 14:07:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:13.212 14:07:43 -- scripts/common.sh@394 -- # pt= 00:08:13.212 14:07:43 -- scripts/common.sh@395 -- # return 1 00:08:13.212 14:07:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:13.212 1+0 records in 00:08:13.212 1+0 records out 00:08:13.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00438468 s, 239 MB/s 00:08:13.212 14:07:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.212 14:07:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.212 14:07:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:08:13.212 14:07:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:08:13.212 14:07:43 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:13.212 No valid GPT data, bailing 00:08:13.212 14:07:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:13.212 14:07:44 -- scripts/common.sh@394 -- # pt= 00:08:13.212 14:07:44 -- scripts/common.sh@395 -- # return 1 00:08:13.212 14:07:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:13.212 1+0 records in 00:08:13.212 1+0 records out 00:08:13.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00360385 s, 291 MB/s 00:08:13.212 14:07:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:13.212 14:07:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:13.212 14:07:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:08:13.212 14:07:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:08:13.212 14:07:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:13.212 No valid GPT data, bailing 00:08:13.212 14:07:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:13.212 14:07:44 -- scripts/common.sh@394 -- # pt= 00:08:13.212 14:07:44 -- scripts/common.sh@395 -- # return 1 00:08:13.212 14:07:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:13.212 1+0 records in 00:08:13.212 1+0 records out 00:08:13.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432806 s, 242 MB/s 00:08:13.212 14:07:44 -- spdk/autotest.sh@105 -- # sync 00:08:13.472 14:07:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:13.472 14:07:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:13.472 14:07:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:16.010 14:07:46 -- spdk/autotest.sh@111 -- # uname -s 00:08:16.010 14:07:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:16.010 14:07:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:16.010 14:07:46 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:16.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:16.946 Hugepages 00:08:16.946 node hugesize free / total 00:08:16.946 node0 1048576kB 0 / 0 00:08:16.946 node0 2048kB 0 / 0 00:08:16.946 00:08:16.946 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:16.946 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:17.206 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:17.206 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:17.206 14:07:48 -- spdk/autotest.sh@117 -- # uname -s 00:08:17.206 14:07:48 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:17.206 14:07:48 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:17.206 14:07:48 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:18.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:18.144 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.144 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:18.144 14:07:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:19.550 14:07:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:19.550 14:07:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:19.550 14:07:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:19.550 14:07:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:19.550 14:07:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:19.550 14:07:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:19.550 14:07:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:19.550 14:07:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.550 14:07:50 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:08:19.550 14:07:50 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:19.550 14:07:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:19.550 14:07:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:19.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.810 Waiting for block devices as requested 00:08:19.810 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:20.071 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:20.071 14:07:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:20.071 14:07:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:20.071 14:07:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 
00:08:20.071 14:07:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:20.071 14:07:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1543 -- # continue 00:08:20.071 14:07:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:20.071 14:07:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:20.071 14:07:50 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:20.071 14:07:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:08:20.071 14:07:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:20.071 14:07:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:20.071 14:07:50 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:20.071 14:07:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:20.071 14:07:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:20.071 14:07:50 -- common/autotest_common.sh@1543 -- # continue 00:08:20.071 14:07:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:20.071 14:07:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.071 14:07:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.344 14:07:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:20.344 14:07:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:20.344 14:07:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.344 14:07:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:20.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:21.187 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.187 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:21.187 14:07:52 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:21.187 14:07:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.187 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.187 14:07:52 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:21.187 14:07:52 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:21.187 14:07:52 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:21.187 14:07:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:21.187 14:07:52 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:21.187 14:07:52 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:21.187 14:07:52 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:21.187 14:07:52 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:21.187 14:07:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:21.187 14:07:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:21.187 14:07:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.187 14:07:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.187 14:07:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:21.446 14:07:52 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:08:21.446 14:07:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:21.446 14:07:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:21.446 14:07:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:08:21.446 14:07:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:21.446 14:07:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.446 14:07:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:21.446 14:07:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:08:21.446 14:07:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:08:21.446 14:07:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:08:21.446 14:07:52 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:21.446 14:07:52 -- common/autotest_common.sh@1572 -- # return 0 00:08:21.446 14:07:52 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:21.446 14:07:52 -- common/autotest_common.sh@1580 -- # return 0 00:08:21.446 14:07:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:21.446 14:07:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:21.446 14:07:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:21.446 14:07:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:21.446 14:07:52 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:08:21.446 14:07:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.446 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.446 14:07:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:21.446 14:07:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:21.446 14:07:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.446 14:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.446 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.446 ************************************ 00:08:21.446 START TEST env 00:08:21.446 ************************************ 00:08:21.446 14:07:52 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:21.446 * Looking for test storage... 00:08:21.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:21.446 14:07:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.446 14:07:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.446 14:07:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.706 14:07:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.706 14:07:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.706 14:07:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.706 14:07:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.706 14:07:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.706 14:07:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.706 14:07:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.706 14:07:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.706 14:07:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.706 14:07:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.706 14:07:52 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:08:21.706 14:07:52 env -- scripts/common.sh@344 -- # case "$op" in 00:08:21.706 14:07:52 env -- scripts/common.sh@345 -- # : 1 00:08:21.706 14:07:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.706 14:07:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.706 14:07:52 env -- scripts/common.sh@365 -- # decimal 1 00:08:21.706 14:07:52 env -- scripts/common.sh@353 -- # local d=1 00:08:21.706 14:07:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.706 14:07:52 env -- scripts/common.sh@355 -- # echo 1 00:08:21.706 14:07:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.706 14:07:52 env -- scripts/common.sh@366 -- # decimal 2 00:08:21.706 14:07:52 env -- scripts/common.sh@353 -- # local d=2 00:08:21.706 14:07:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.706 14:07:52 env -- scripts/common.sh@355 -- # echo 2 00:08:21.706 14:07:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.706 14:07:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.706 14:07:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.706 14:07:52 env -- scripts/common.sh@368 -- # return 0 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.706 --rc genhtml_branch_coverage=1 00:08:21.706 --rc genhtml_function_coverage=1 00:08:21.706 --rc genhtml_legend=1 00:08:21.706 --rc geninfo_all_blocks=1 00:08:21.706 --rc geninfo_unexecuted_blocks=1 00:08:21.706 00:08:21.706 ' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.706 --rc genhtml_branch_coverage=1 00:08:21.706 --rc genhtml_function_coverage=1 
00:08:21.706 --rc genhtml_legend=1 00:08:21.706 --rc geninfo_all_blocks=1 00:08:21.706 --rc geninfo_unexecuted_blocks=1 00:08:21.706 00:08:21.706 ' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.706 --rc genhtml_branch_coverage=1 00:08:21.706 --rc genhtml_function_coverage=1 00:08:21.706 --rc genhtml_legend=1 00:08:21.706 --rc geninfo_all_blocks=1 00:08:21.706 --rc geninfo_unexecuted_blocks=1 00:08:21.706 00:08:21.706 ' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.706 --rc genhtml_branch_coverage=1 00:08:21.706 --rc genhtml_function_coverage=1 00:08:21.706 --rc genhtml_legend=1 00:08:21.706 --rc geninfo_all_blocks=1 00:08:21.706 --rc geninfo_unexecuted_blocks=1 00:08:21.706 00:08:21.706 ' 00:08:21.706 14:07:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.706 14:07:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.706 14:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.706 ************************************ 00:08:21.706 START TEST env_memory 00:08:21.706 ************************************ 00:08:21.706 14:07:52 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:21.706 00:08:21.706 00:08:21.706 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.706 http://cunit.sourceforge.net/ 00:08:21.706 00:08:21.706 00:08:21.706 Suite: memory 00:08:21.706 Test: alloc and free memory map ...[2024-11-27 14:07:52.584384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:21.706 passed 00:08:21.706 Test: mem map translation 
...[2024-11-27 14:07:52.630539] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:21.706 [2024-11-27 14:07:52.630680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:21.706 [2024-11-27 14:07:52.630884] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:21.706 [2024-11-27 14:07:52.630951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:21.966 passed 00:08:21.966 Test: mem map registration ...[2024-11-27 14:07:52.704998] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:21.966 [2024-11-27 14:07:52.705139] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:21.966 passed 00:08:21.966 Test: mem map adjacent registrations ...passed 00:08:21.966 00:08:21.966 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.966 suites 1 1 n/a 0 0 00:08:21.966 tests 4 4 4 0 0 00:08:21.966 asserts 152 152 152 0 n/a 00:08:21.966 00:08:21.966 Elapsed time = 0.259 seconds 00:08:21.966 00:08:21.966 real 0m0.318s 00:08:21.966 user 0m0.276s 00:08:21.966 sys 0m0.031s 00:08:21.966 14:07:52 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.966 14:07:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 ************************************ 00:08:21.966 END TEST env_memory 00:08:21.966 ************************************ 00:08:21.966 14:07:52 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:21.966 14:07:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.966 14:07:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.966 14:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.966 ************************************ 00:08:21.966 START TEST env_vtophys 00:08:21.966 ************************************ 00:08:21.966 14:07:52 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:22.225 EAL: lib.eal log level changed from notice to debug 00:08:22.225 EAL: Detected lcore 0 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 1 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 2 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 3 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 4 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 5 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 6 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 7 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 8 as core 0 on socket 0 00:08:22.225 EAL: Detected lcore 9 as core 0 on socket 0 00:08:22.225 EAL: Maximum logical cores by configuration: 128 00:08:22.225 EAL: Detected CPU lcores: 10 00:08:22.225 EAL: Detected NUMA nodes: 1 00:08:22.225 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:22.225 EAL: Detected shared linkage of DPDK 00:08:22.225 EAL: No shared files mode enabled, IPC will be disabled 00:08:22.225 EAL: Selected IOVA mode 'PA' 00:08:22.225 EAL: Probing VFIO support... 00:08:22.225 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:22.225 EAL: VFIO modules not loaded, skipping VFIO support... 00:08:22.225 EAL: Ask a virtual area of 0x2e000 bytes 00:08:22.225 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:22.225 EAL: Setting up physically contiguous memory... 
00:08:22.225 EAL: Setting maximum number of open files to 524288 00:08:22.225 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:22.225 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:22.225 EAL: Ask a virtual area of 0x61000 bytes 00:08:22.225 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:22.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:22.225 EAL: Ask a virtual area of 0x400000000 bytes 00:08:22.225 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:22.225 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:22.225 EAL: Ask a virtual area of 0x61000 bytes 00:08:22.225 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:22.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:22.225 EAL: Ask a virtual area of 0x400000000 bytes 00:08:22.225 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:22.225 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:22.225 EAL: Ask a virtual area of 0x61000 bytes 00:08:22.225 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:22.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:22.225 EAL: Ask a virtual area of 0x400000000 bytes 00:08:22.225 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:22.225 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:22.225 EAL: Ask a virtual area of 0x61000 bytes 00:08:22.225 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:22.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:22.225 EAL: Ask a virtual area of 0x400000000 bytes 00:08:22.225 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:22.225 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:22.225 EAL: Hugepages will be freed exactly as allocated. 
00:08:22.225 EAL: No shared files mode enabled, IPC is disabled 00:08:22.225 EAL: No shared files mode enabled, IPC is disabled 00:08:22.225 EAL: TSC frequency is ~2290000 KHz 00:08:22.225 EAL: Main lcore 0 is ready (tid=7f57642e3a40;cpuset=[0]) 00:08:22.225 EAL: Trying to obtain current memory policy. 00:08:22.225 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.225 EAL: Restoring previous memory policy: 0 00:08:22.226 EAL: request: mp_malloc_sync 00:08:22.226 EAL: No shared files mode enabled, IPC is disabled 00:08:22.226 EAL: Heap on socket 0 was expanded by 2MB 00:08:22.226 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:08:22.226 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:22.226 EAL: Mem event callback 'spdk:(nil)' registered 00:08:22.226 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:08:22.226 00:08:22.226 00:08:22.226 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.226 http://cunit.sourceforge.net/ 00:08:22.226 00:08:22.226 00:08:22.226 Suite: components_suite 00:08:22.794 Test: vtophys_malloc_test ...passed 00:08:22.794 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.794 EAL: Restoring previous memory policy: 4 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was expanded by 4MB 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was shrunk by 4MB 00:08:22.794 EAL: Trying to obtain current memory policy. 
00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.794 EAL: Restoring previous memory policy: 4 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was expanded by 6MB 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was shrunk by 6MB 00:08:22.794 EAL: Trying to obtain current memory policy. 00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.794 EAL: Restoring previous memory policy: 4 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was expanded by 10MB 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was shrunk by 10MB 00:08:22.794 EAL: Trying to obtain current memory policy. 00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.794 EAL: Restoring previous memory policy: 4 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was expanded by 18MB 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was shrunk by 18MB 00:08:22.794 EAL: Trying to obtain current memory policy. 
00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.794 EAL: Restoring previous memory policy: 4 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was expanded by 34MB 00:08:22.794 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.794 EAL: request: mp_malloc_sync 00:08:22.794 EAL: No shared files mode enabled, IPC is disabled 00:08:22.794 EAL: Heap on socket 0 was shrunk by 34MB 00:08:22.794 EAL: Trying to obtain current memory policy. 00:08:22.794 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.054 EAL: Restoring previous memory policy: 4 00:08:23.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.054 EAL: request: mp_malloc_sync 00:08:23.054 EAL: No shared files mode enabled, IPC is disabled 00:08:23.054 EAL: Heap on socket 0 was expanded by 66MB 00:08:23.054 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.054 EAL: request: mp_malloc_sync 00:08:23.054 EAL: No shared files mode enabled, IPC is disabled 00:08:23.054 EAL: Heap on socket 0 was shrunk by 66MB 00:08:23.054 EAL: Trying to obtain current memory policy. 00:08:23.054 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.313 EAL: Restoring previous memory policy: 4 00:08:23.313 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.313 EAL: request: mp_malloc_sync 00:08:23.313 EAL: No shared files mode enabled, IPC is disabled 00:08:23.313 EAL: Heap on socket 0 was expanded by 130MB 00:08:23.313 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.573 EAL: request: mp_malloc_sync 00:08:23.573 EAL: No shared files mode enabled, IPC is disabled 00:08:23.573 EAL: Heap on socket 0 was shrunk by 130MB 00:08:23.573 EAL: Trying to obtain current memory policy. 
00:08:23.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:23.834 EAL: Restoring previous memory policy: 4 00:08:23.834 EAL: Calling mem event callback 'spdk:(nil)' 00:08:23.834 EAL: request: mp_malloc_sync 00:08:23.834 EAL: No shared files mode enabled, IPC is disabled 00:08:23.834 EAL: Heap on socket 0 was expanded by 258MB 00:08:24.094 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.094 EAL: request: mp_malloc_sync 00:08:24.094 EAL: No shared files mode enabled, IPC is disabled 00:08:24.094 EAL: Heap on socket 0 was shrunk by 258MB 00:08:24.661 EAL: Trying to obtain current memory policy. 00:08:24.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:24.661 EAL: Restoring previous memory policy: 4 00:08:24.661 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.661 EAL: request: mp_malloc_sync 00:08:24.661 EAL: No shared files mode enabled, IPC is disabled 00:08:24.661 EAL: Heap on socket 0 was expanded by 514MB 00:08:25.599 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.859 EAL: request: mp_malloc_sync 00:08:25.859 EAL: No shared files mode enabled, IPC is disabled 00:08:25.859 EAL: Heap on socket 0 was shrunk by 514MB 00:08:26.795 EAL: Trying to obtain current memory policy. 
00:08:26.795 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:26.795 EAL: Restoring previous memory policy: 4 00:08:26.795 EAL: Calling mem event callback 'spdk:(nil)' 00:08:26.795 EAL: request: mp_malloc_sync 00:08:26.795 EAL: No shared files mode enabled, IPC is disabled 00:08:26.795 EAL: Heap on socket 0 was expanded by 1026MB 00:08:28.703 EAL: Calling mem event callback 'spdk:(nil)' 00:08:28.962 EAL: request: mp_malloc_sync 00:08:28.962 EAL: No shared files mode enabled, IPC is disabled 00:08:28.962 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:30.886 passed 00:08:30.886 00:08:30.886 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.886 suites 1 1 n/a 0 0 00:08:30.886 tests 2 2 2 0 0 00:08:30.886 asserts 5901 5901 5901 0 n/a 00:08:30.886 00:08:30.886 Elapsed time = 8.257 seconds 00:08:30.886 EAL: Calling mem event callback 'spdk:(nil)' 00:08:30.886 EAL: request: mp_malloc_sync 00:08:30.886 EAL: No shared files mode enabled, IPC is disabled 00:08:30.886 EAL: Heap on socket 0 was shrunk by 2MB 00:08:30.886 EAL: No shared files mode enabled, IPC is disabled 00:08:30.886 EAL: No shared files mode enabled, IPC is disabled 00:08:30.886 EAL: No shared files mode enabled, IPC is disabled 00:08:30.886 00:08:30.886 real 0m8.583s 00:08:30.886 user 0m7.593s 00:08:30.886 sys 0m0.837s 00:08:30.886 14:08:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.886 14:08:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:30.886 ************************************ 00:08:30.886 END TEST env_vtophys 00:08:30.886 ************************************ 00:08:30.886 14:08:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:30.886 14:08:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.886 14:08:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.886 14:08:01 env -- common/autotest_common.sh@10 -- # set +x 00:08:30.886 
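The env_vtophys heap events above expand and shrink socket 0 by 34MB, 66MB, 130MB, 258MB, 514MB and 1026MB in turn. As a side observation (inferred from this log, not taken from the test's source), the sizes follow a 2^k + 2 MB progression, which a minimal shell sketch reproduces:

```shell
# Reproduce the allocation-size sequence seen in the env_vtophys log above.
# The 2^k + 2 MB pattern is an inference from the log, not from SPDK source.
for k in 5 6 7 8 9 10; do
  printf '%dMB\n' $(( (1 << k) + 2 ))
done
# prints 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB (one per line)
```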
************************************ 00:08:30.886 START TEST env_pci 00:08:30.886 ************************************ 00:08:30.886 14:08:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:30.886 00:08:30.886 00:08:30.886 CUnit - A unit testing framework for C - Version 2.1-3 00:08:30.886 http://cunit.sourceforge.net/ 00:08:30.886 00:08:30.886 00:08:30.886 Suite: pci 00:08:30.886 Test: pci_hook ...[2024-11-27 14:08:01.580781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56926 has claimed it 00:08:30.886 passed 00:08:30.886 00:08:30.886 Run Summary: Type Total Ran Passed Failed Inactive 00:08:30.886 suites 1 1 n/a 0 0 00:08:30.886 tests 1 1 1 0 0 00:08:30.886 asserts 25 25 25 0 n/a 00:08:30.886 00:08:30.886 Elapsed time = 0.008 secondsEAL: Cannot find device (10000:00:01.0) 00:08:30.886 EAL: Failed to attach device on primary process 00:08:30.886 00:08:30.886 00:08:30.886 real 0m0.109s 00:08:30.886 user 0m0.046s 00:08:30.886 sys 0m0.062s 00:08:30.886 14:08:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.886 14:08:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:30.886 ************************************ 00:08:30.886 END TEST env_pci 00:08:30.886 ************************************ 00:08:30.886 14:08:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:30.886 14:08:01 env -- env/env.sh@15 -- # uname 00:08:30.886 14:08:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:30.886 14:08:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:30.886 14:08:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:30.886 14:08:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.886 14:08:01 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.886 14:08:01 env -- common/autotest_common.sh@10 -- # set +x 00:08:30.886 ************************************ 00:08:30.886 START TEST env_dpdk_post_init 00:08:30.886 ************************************ 00:08:30.886 14:08:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:30.886 EAL: Detected CPU lcores: 10 00:08:30.886 EAL: Detected NUMA nodes: 1 00:08:30.886 EAL: Detected shared linkage of DPDK 00:08:30.886 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:30.886 EAL: Selected IOVA mode 'PA' 00:08:31.147 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:31.147 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:31.147 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:08:31.147 Starting DPDK initialization... 00:08:31.147 Starting SPDK post initialization... 00:08:31.147 SPDK NVMe probe 00:08:31.147 Attaching to 0000:00:10.0 00:08:31.147 Attaching to 0000:00:11.0 00:08:31.147 Attached to 0000:00:10.0 00:08:31.147 Attached to 0000:00:11.0 00:08:31.147 Cleaning up... 
00:08:31.147 00:08:31.147 real 0m0.279s 00:08:31.147 user 0m0.090s 00:08:31.147 sys 0m0.090s 00:08:31.147 14:08:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.147 14:08:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:31.147 ************************************ 00:08:31.147 END TEST env_dpdk_post_init 00:08:31.147 ************************************ 00:08:31.147 14:08:02 env -- env/env.sh@26 -- # uname 00:08:31.147 14:08:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:31.147 14:08:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:31.147 14:08:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.147 14:08:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.147 14:08:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:31.147 ************************************ 00:08:31.147 START TEST env_mem_callbacks 00:08:31.147 ************************************ 00:08:31.147 14:08:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:31.407 EAL: Detected CPU lcores: 10 00:08:31.407 EAL: Detected NUMA nodes: 1 00:08:31.407 EAL: Detected shared linkage of DPDK 00:08:31.407 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:31.407 EAL: Selected IOVA mode 'PA' 00:08:31.407 00:08:31.407 00:08:31.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:31.407 http://cunit.sourceforge.net/ 00:08:31.407 00:08:31.407 00:08:31.407 Suite: memory 00:08:31.407 Test: test ... 
00:08:31.407 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:31.407 register 0x200000200000 2097152 00:08:31.407 malloc 3145728 00:08:31.407 register 0x200000400000 4194304 00:08:31.407 buf 0x2000004fffc0 len 3145728 PASSED 00:08:31.407 malloc 64 00:08:31.407 buf 0x2000004ffec0 len 64 PASSED 00:08:31.407 malloc 4194304 00:08:31.407 register 0x200000800000 6291456 00:08:31.407 buf 0x2000009fffc0 len 4194304 PASSED 00:08:31.407 free 0x2000004fffc0 3145728 00:08:31.407 free 0x2000004ffec0 64 00:08:31.407 unregister 0x200000400000 4194304 PASSED 00:08:31.407 free 0x2000009fffc0 4194304 00:08:31.407 unregister 0x200000800000 6291456 PASSED 00:08:31.407 malloc 8388608 00:08:31.407 register 0x200000400000 10485760 00:08:31.407 buf 0x2000005fffc0 len 8388608 PASSED 00:08:31.407 free 0x2000005fffc0 8388608 00:08:31.407 unregister 0x200000400000 10485760 PASSED 00:08:31.407 passed 00:08:31.407 00:08:31.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:31.407 suites 1 1 n/a 0 0 00:08:31.407 tests 1 1 1 0 0 00:08:31.407 asserts 15 15 15 0 n/a 00:08:31.407 00:08:31.407 Elapsed time = 0.087 seconds 00:08:31.407 00:08:31.407 real 0m0.285s 00:08:31.407 user 0m0.114s 00:08:31.407 sys 0m0.069s 00:08:31.407 14:08:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.407 14:08:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:31.407 ************************************ 00:08:31.407 END TEST env_mem_callbacks 00:08:31.407 ************************************ 00:08:31.667 ************************************ 00:08:31.667 END TEST env 00:08:31.667 ************************************ 00:08:31.667 00:08:31.667 real 0m10.148s 00:08:31.667 user 0m8.352s 00:08:31.667 sys 0m1.451s 00:08:31.667 14:08:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.667 14:08:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:31.667 14:08:02 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:31.667 14:08:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.667 14:08:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.667 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.667 ************************************ 00:08:31.667 START TEST rpc 00:08:31.667 ************************************ 00:08:31.667 14:08:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:31.667 * Looking for test storage... 00:08:31.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:31.667 14:08:02 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.667 14:08:02 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.667 14:08:02 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.927 14:08:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.927 14:08:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.927 14:08:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.927 14:08:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.927 14:08:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.927 14:08:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:31.927 14:08:02 rpc -- scripts/common.sh@345 -- # : 1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.927 14:08:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.927 14:08:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@353 -- # local d=1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.927 14:08:02 rpc -- scripts/common.sh@355 -- # echo 1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.927 14:08:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@353 -- # local d=2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.927 14:08:02 rpc -- scripts/common.sh@355 -- # echo 2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.927 14:08:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.927 14:08:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.927 14:08:02 rpc -- scripts/common.sh@368 -- # return 0 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:31.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.927 --rc genhtml_branch_coverage=1 00:08:31.927 --rc genhtml_function_coverage=1 00:08:31.927 --rc genhtml_legend=1 00:08:31.927 --rc geninfo_all_blocks=1 00:08:31.927 --rc geninfo_unexecuted_blocks=1 00:08:31.927 00:08:31.927 ' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:31.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.927 --rc genhtml_branch_coverage=1 00:08:31.927 --rc genhtml_function_coverage=1 00:08:31.927 --rc genhtml_legend=1 00:08:31.927 --rc geninfo_all_blocks=1 00:08:31.927 --rc geninfo_unexecuted_blocks=1 00:08:31.927 00:08:31.927 ' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:31.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:31.927 --rc genhtml_branch_coverage=1 00:08:31.927 --rc genhtml_function_coverage=1 00:08:31.927 --rc genhtml_legend=1 00:08:31.927 --rc geninfo_all_blocks=1 00:08:31.927 --rc geninfo_unexecuted_blocks=1 00:08:31.927 00:08:31.927 ' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:31.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.927 --rc genhtml_branch_coverage=1 00:08:31.927 --rc genhtml_function_coverage=1 00:08:31.927 --rc genhtml_legend=1 00:08:31.927 --rc geninfo_all_blocks=1 00:08:31.927 --rc geninfo_unexecuted_blocks=1 00:08:31.927 00:08:31.927 ' 00:08:31.927 14:08:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57059 00:08:31.927 14:08:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:31.927 14:08:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:31.927 14:08:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57059 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 57059 ']' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.927 14:08:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:31.927 [2024-11-27 14:08:02.803601] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:31.927 [2024-11-27 14:08:02.803726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57059 ] 00:08:32.187 [2024-11-27 14:08:02.981616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.187 [2024-11-27 14:08:03.103192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:32.187 [2024-11-27 14:08:03.103255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57059' to capture a snapshot of events at runtime. 00:08:32.187 [2024-11-27 14:08:03.103265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.187 [2024-11-27 14:08:03.103275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.187 [2024-11-27 14:08:03.103283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57059 for offline analysis/debug. 
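The xtrace from scripts/common.sh above (`lt 1.15 2` via `cmp_versions`) splits both versions on `.-:` with `read -ra` and compares them field by field. A hedged, self-contained sketch of that comparison, under the assumption that missing fields count as 0 (`ver_lt` is an illustrative name, not the real helper):

```shell
# Sketch of the dotted-version "less than" check traced above from
# scripts/common.sh. Field names and the ver_lt name are illustrative.
ver_lt() {
  local IFS='.-:'
  local -a v1 v2
  read -ra v1 <<< "$1"          # split "1.15" into fields, as in the trace
  read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  local i a b
  for (( i = 0; i < n; i++ )); do
    a=${v1[i]:-0}; b=${v2[i]:-0} # assume absent fields compare as 0
    if (( a < b )); then return 0; fi
    if (( a > b )); then return 1; fi
  done
  return 1                       # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lt 1.15 2 trace above
```

This mirrors why the trace returns 0 after comparing only the first field: 1 < 2 decides the result before the remaining fields are examined.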
00:08:32.187 [2024-11-27 14:08:03.104517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.143 14:08:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.143 14:08:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:08:33.143 14:08:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:33.143 14:08:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:33.143 14:08:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:33.144 14:08:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:33.144 14:08:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.144 14:08:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.144 14:08:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 ************************************ 00:08:33.144 START TEST rpc_integrity 00:08:33.144 ************************************ 00:08:33.144 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:33.144 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:33.144 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.144 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.144 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.144 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:33.144 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:33.411 14:08:04 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:33.411 { 00:08:33.411 "name": "Malloc0", 00:08:33.411 "aliases": [ 00:08:33.411 "d4f44f85-bfa1-4347-9c4c-75b0d946ae33" 00:08:33.411 ], 00:08:33.411 "product_name": "Malloc disk", 00:08:33.411 "block_size": 512, 00:08:33.411 "num_blocks": 16384, 00:08:33.411 "uuid": "d4f44f85-bfa1-4347-9c4c-75b0d946ae33", 00:08:33.411 "assigned_rate_limits": { 00:08:33.411 "rw_ios_per_sec": 0, 00:08:33.411 "rw_mbytes_per_sec": 0, 00:08:33.411 "r_mbytes_per_sec": 0, 00:08:33.411 "w_mbytes_per_sec": 0 00:08:33.411 }, 00:08:33.411 "claimed": false, 00:08:33.411 "zoned": false, 00:08:33.411 "supported_io_types": { 00:08:33.411 "read": true, 00:08:33.411 "write": true, 00:08:33.411 "unmap": true, 00:08:33.411 "flush": true, 00:08:33.411 "reset": true, 00:08:33.411 "nvme_admin": false, 00:08:33.411 "nvme_io": false, 00:08:33.411 "nvme_io_md": false, 00:08:33.411 "write_zeroes": true, 00:08:33.411 "zcopy": true, 00:08:33.411 "get_zone_info": false, 00:08:33.411 "zone_management": false, 00:08:33.411 "zone_append": false, 00:08:33.411 "compare": false, 00:08:33.411 "compare_and_write": false, 00:08:33.411 "abort": true, 00:08:33.411 "seek_hole": false, 
00:08:33.411 "seek_data": false, 00:08:33.411 "copy": true, 00:08:33.411 "nvme_iov_md": false 00:08:33.411 }, 00:08:33.411 "memory_domains": [ 00:08:33.411 { 00:08:33.411 "dma_device_id": "system", 00:08:33.411 "dma_device_type": 1 00:08:33.411 }, 00:08:33.411 { 00:08:33.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.411 "dma_device_type": 2 00:08:33.411 } 00:08:33.411 ], 00:08:33.411 "driver_specific": {} 00:08:33.411 } 00:08:33.411 ]' 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:33.411 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.411 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.411 [2024-11-27 14:08:04.201681] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:33.411 [2024-11-27 14:08:04.201756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.411 [2024-11-27 14:08:04.201817] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:33.411 [2024-11-27 14:08:04.201839] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.412 [2024-11-27 14:08:04.204482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.412 [2024-11-27 14:08:04.204528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:33.412 Passthru0 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:33.412 { 00:08:33.412 "name": "Malloc0", 00:08:33.412 "aliases": [ 00:08:33.412 "d4f44f85-bfa1-4347-9c4c-75b0d946ae33" 00:08:33.412 ], 00:08:33.412 "product_name": "Malloc disk", 00:08:33.412 "block_size": 512, 00:08:33.412 "num_blocks": 16384, 00:08:33.412 "uuid": "d4f44f85-bfa1-4347-9c4c-75b0d946ae33", 00:08:33.412 "assigned_rate_limits": { 00:08:33.412 "rw_ios_per_sec": 0, 00:08:33.412 "rw_mbytes_per_sec": 0, 00:08:33.412 "r_mbytes_per_sec": 0, 00:08:33.412 "w_mbytes_per_sec": 0 00:08:33.412 }, 00:08:33.412 "claimed": true, 00:08:33.412 "claim_type": "exclusive_write", 00:08:33.412 "zoned": false, 00:08:33.412 "supported_io_types": { 00:08:33.412 "read": true, 00:08:33.412 "write": true, 00:08:33.412 "unmap": true, 00:08:33.412 "flush": true, 00:08:33.412 "reset": true, 00:08:33.412 "nvme_admin": false, 00:08:33.412 "nvme_io": false, 00:08:33.412 "nvme_io_md": false, 00:08:33.412 "write_zeroes": true, 00:08:33.412 "zcopy": true, 00:08:33.412 "get_zone_info": false, 00:08:33.412 "zone_management": false, 00:08:33.412 "zone_append": false, 00:08:33.412 "compare": false, 00:08:33.412 "compare_and_write": false, 00:08:33.412 "abort": true, 00:08:33.412 "seek_hole": false, 00:08:33.412 "seek_data": false, 00:08:33.412 "copy": true, 00:08:33.412 "nvme_iov_md": false 00:08:33.412 }, 00:08:33.412 "memory_domains": [ 00:08:33.412 { 00:08:33.412 "dma_device_id": "system", 00:08:33.412 "dma_device_type": 1 00:08:33.412 }, 00:08:33.412 { 00:08:33.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.412 "dma_device_type": 2 00:08:33.412 } 00:08:33.412 ], 00:08:33.412 "driver_specific": {} 00:08:33.412 }, 00:08:33.412 { 00:08:33.412 "name": "Passthru0", 00:08:33.412 "aliases": [ 00:08:33.412 "34d6fb04-1b12-5161-9ed6-f4db6a9348f0" 00:08:33.412 ], 00:08:33.412 "product_name": "passthru", 00:08:33.412 
"block_size": 512, 00:08:33.412 "num_blocks": 16384, 00:08:33.412 "uuid": "34d6fb04-1b12-5161-9ed6-f4db6a9348f0", 00:08:33.412 "assigned_rate_limits": { 00:08:33.412 "rw_ios_per_sec": 0, 00:08:33.412 "rw_mbytes_per_sec": 0, 00:08:33.412 "r_mbytes_per_sec": 0, 00:08:33.412 "w_mbytes_per_sec": 0 00:08:33.412 }, 00:08:33.412 "claimed": false, 00:08:33.412 "zoned": false, 00:08:33.412 "supported_io_types": { 00:08:33.412 "read": true, 00:08:33.412 "write": true, 00:08:33.412 "unmap": true, 00:08:33.412 "flush": true, 00:08:33.412 "reset": true, 00:08:33.412 "nvme_admin": false, 00:08:33.412 "nvme_io": false, 00:08:33.412 "nvme_io_md": false, 00:08:33.412 "write_zeroes": true, 00:08:33.412 "zcopy": true, 00:08:33.412 "get_zone_info": false, 00:08:33.412 "zone_management": false, 00:08:33.412 "zone_append": false, 00:08:33.412 "compare": false, 00:08:33.412 "compare_and_write": false, 00:08:33.412 "abort": true, 00:08:33.412 "seek_hole": false, 00:08:33.412 "seek_data": false, 00:08:33.412 "copy": true, 00:08:33.412 "nvme_iov_md": false 00:08:33.412 }, 00:08:33.412 "memory_domains": [ 00:08:33.412 { 00:08:33.412 "dma_device_id": "system", 00:08:33.412 "dma_device_type": 1 00:08:33.412 }, 00:08:33.412 { 00:08:33.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.412 "dma_device_type": 2 00:08:33.412 } 00:08:33.412 ], 00:08:33.412 "driver_specific": { 00:08:33.412 "passthru": { 00:08:33.412 "name": "Passthru0", 00:08:33.412 "base_bdev_name": "Malloc0" 00:08:33.412 } 00:08:33.412 } 00:08:33.412 } 00:08:33.412 ]' 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.412 14:08:04 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.412 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:33.412 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:33.671 ************************************ 00:08:33.671 END TEST rpc_integrity 00:08:33.671 ************************************ 00:08:33.671 14:08:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:33.671 00:08:33.671 real 0m0.377s 00:08:33.671 user 0m0.210s 00:08:33.671 sys 0m0.058s 00:08:33.671 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.671 14:08:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:33.671 14:08:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:33.671 14:08:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.671 14:08:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.671 14:08:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.671 ************************************ 00:08:33.671 START TEST rpc_plugins 00:08:33.671 ************************************ 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:08:33.671 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.671 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:33.671 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:33.671 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.671 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:33.671 { 00:08:33.671 "name": "Malloc1", 00:08:33.671 "aliases": [ 00:08:33.671 "c34dc81f-e477-4d0c-8594-a3f05aba9e51" 00:08:33.671 ], 00:08:33.671 "product_name": "Malloc disk", 00:08:33.671 "block_size": 4096, 00:08:33.671 "num_blocks": 256, 00:08:33.671 "uuid": "c34dc81f-e477-4d0c-8594-a3f05aba9e51", 00:08:33.671 "assigned_rate_limits": { 00:08:33.671 "rw_ios_per_sec": 0, 00:08:33.671 "rw_mbytes_per_sec": 0, 00:08:33.671 "r_mbytes_per_sec": 0, 00:08:33.671 "w_mbytes_per_sec": 0 00:08:33.671 }, 00:08:33.671 "claimed": false, 00:08:33.671 "zoned": false, 00:08:33.671 "supported_io_types": { 00:08:33.671 "read": true, 00:08:33.672 "write": true, 00:08:33.672 "unmap": true, 00:08:33.672 "flush": true, 00:08:33.672 "reset": true, 00:08:33.672 "nvme_admin": false, 00:08:33.672 "nvme_io": false, 00:08:33.672 "nvme_io_md": false, 00:08:33.672 "write_zeroes": true, 00:08:33.672 "zcopy": true, 00:08:33.672 "get_zone_info": false, 00:08:33.672 "zone_management": false, 00:08:33.672 "zone_append": false, 00:08:33.672 "compare": false, 00:08:33.672 "compare_and_write": false, 00:08:33.672 "abort": true, 00:08:33.672 "seek_hole": false, 00:08:33.672 "seek_data": false, 00:08:33.672 "copy": 
true, 00:08:33.672 "nvme_iov_md": false 00:08:33.672 }, 00:08:33.672 "memory_domains": [ 00:08:33.672 { 00:08:33.672 "dma_device_id": "system", 00:08:33.672 "dma_device_type": 1 00:08:33.672 }, 00:08:33.672 { 00:08:33.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.672 "dma_device_type": 2 00:08:33.672 } 00:08:33.672 ], 00:08:33.672 "driver_specific": {} 00:08:33.672 } 00:08:33.672 ]' 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:33.672 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:33.672 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:33.930 14:08:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:33.930 00:08:33.930 real 0m0.171s 00:08:33.930 user 0m0.098s 00:08:33.930 sys 0m0.026s 00:08:33.930 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.930 14:08:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:33.930 ************************************ 00:08:33.930 END TEST rpc_plugins 00:08:33.930 ************************************ 00:08:33.930 14:08:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:33.930 14:08:04 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.931 14:08:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.931 14:08:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 ************************************ 00:08:33.931 START TEST rpc_trace_cmd_test 00:08:33.931 ************************************ 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:33.931 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57059", 00:08:33.931 "tpoint_group_mask": "0x8", 00:08:33.931 "iscsi_conn": { 00:08:33.931 "mask": "0x2", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "scsi": { 00:08:33.931 "mask": "0x4", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "bdev": { 00:08:33.931 "mask": "0x8", 00:08:33.931 "tpoint_mask": "0xffffffffffffffff" 00:08:33.931 }, 00:08:33.931 "nvmf_rdma": { 00:08:33.931 "mask": "0x10", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "nvmf_tcp": { 00:08:33.931 "mask": "0x20", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "ftl": { 00:08:33.931 "mask": "0x40", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "blobfs": { 00:08:33.931 "mask": "0x80", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "dsa": { 00:08:33.931 "mask": "0x200", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "thread": { 00:08:33.931 "mask": "0x400", 00:08:33.931 
"tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "nvme_pcie": { 00:08:33.931 "mask": "0x800", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "iaa": { 00:08:33.931 "mask": "0x1000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "nvme_tcp": { 00:08:33.931 "mask": "0x2000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "bdev_nvme": { 00:08:33.931 "mask": "0x4000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "sock": { 00:08:33.931 "mask": "0x8000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "blob": { 00:08:33.931 "mask": "0x10000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "bdev_raid": { 00:08:33.931 "mask": "0x20000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 }, 00:08:33.931 "scheduler": { 00:08:33.931 "mask": "0x40000", 00:08:33.931 "tpoint_mask": "0x0" 00:08:33.931 } 00:08:33.931 }' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:33.931 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:34.190 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:34.190 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:34.190 ************************************ 00:08:34.190 END TEST rpc_trace_cmd_test 00:08:34.190 ************************************ 00:08:34.190 14:08:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:34.190 00:08:34.190 real 0m0.248s 00:08:34.190 user 
0m0.203s 00:08:34.190 sys 0m0.036s 00:08:34.190 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.190 14:08:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.190 14:08:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:34.190 14:08:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:34.190 14:08:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:34.190 14:08:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.190 14:08:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.190 14:08:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:34.190 ************************************ 00:08:34.190 START TEST rpc_daemon_integrity 00:08:34.190 ************************************ 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.190 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:34.190 { 00:08:34.190 "name": "Malloc2", 00:08:34.190 "aliases": [ 00:08:34.190 "8c63171f-381f-4cd5-b348-c526ddd0bed2" 00:08:34.190 ], 00:08:34.190 "product_name": "Malloc disk", 00:08:34.190 "block_size": 512, 00:08:34.190 "num_blocks": 16384, 00:08:34.190 "uuid": "8c63171f-381f-4cd5-b348-c526ddd0bed2", 00:08:34.190 "assigned_rate_limits": { 00:08:34.190 "rw_ios_per_sec": 0, 00:08:34.190 "rw_mbytes_per_sec": 0, 00:08:34.190 "r_mbytes_per_sec": 0, 00:08:34.190 "w_mbytes_per_sec": 0 00:08:34.190 }, 00:08:34.190 "claimed": false, 00:08:34.190 "zoned": false, 00:08:34.190 "supported_io_types": { 00:08:34.190 "read": true, 00:08:34.190 "write": true, 00:08:34.190 "unmap": true, 00:08:34.190 "flush": true, 00:08:34.190 "reset": true, 00:08:34.190 "nvme_admin": false, 00:08:34.190 "nvme_io": false, 00:08:34.190 "nvme_io_md": false, 00:08:34.190 "write_zeroes": true, 00:08:34.191 "zcopy": true, 00:08:34.191 "get_zone_info": false, 00:08:34.191 "zone_management": false, 00:08:34.191 "zone_append": false, 00:08:34.191 "compare": false, 00:08:34.191 "compare_and_write": false, 00:08:34.191 "abort": true, 00:08:34.191 "seek_hole": false, 00:08:34.191 "seek_data": false, 00:08:34.191 "copy": true, 00:08:34.191 "nvme_iov_md": false 00:08:34.191 }, 00:08:34.191 "memory_domains": [ 00:08:34.191 { 00:08:34.191 "dma_device_id": "system", 00:08:34.191 "dma_device_type": 1 00:08:34.191 }, 00:08:34.191 { 00:08:34.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.191 "dma_device_type": 2 00:08:34.191 } 
00:08:34.191 ], 00:08:34.191 "driver_specific": {} 00:08:34.191 } 00:08:34.191 ]' 00:08:34.191 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:34.449 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:34.449 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:34.449 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.449 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 [2024-11-27 14:08:05.188688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:34.450 [2024-11-27 14:08:05.188840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.450 [2024-11-27 14:08:05.188869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:34.450 [2024-11-27 14:08:05.188881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.450 [2024-11-27 14:08:05.191431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.450 [2024-11-27 14:08:05.191487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:34.450 Passthru0 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:34.450 { 00:08:34.450 "name": "Malloc2", 00:08:34.450 "aliases": [ 00:08:34.450 "8c63171f-381f-4cd5-b348-c526ddd0bed2" 
00:08:34.450 ], 00:08:34.450 "product_name": "Malloc disk", 00:08:34.450 "block_size": 512, 00:08:34.450 "num_blocks": 16384, 00:08:34.450 "uuid": "8c63171f-381f-4cd5-b348-c526ddd0bed2", 00:08:34.450 "assigned_rate_limits": { 00:08:34.450 "rw_ios_per_sec": 0, 00:08:34.450 "rw_mbytes_per_sec": 0, 00:08:34.450 "r_mbytes_per_sec": 0, 00:08:34.450 "w_mbytes_per_sec": 0 00:08:34.450 }, 00:08:34.450 "claimed": true, 00:08:34.450 "claim_type": "exclusive_write", 00:08:34.450 "zoned": false, 00:08:34.450 "supported_io_types": { 00:08:34.450 "read": true, 00:08:34.450 "write": true, 00:08:34.450 "unmap": true, 00:08:34.450 "flush": true, 00:08:34.450 "reset": true, 00:08:34.450 "nvme_admin": false, 00:08:34.450 "nvme_io": false, 00:08:34.450 "nvme_io_md": false, 00:08:34.450 "write_zeroes": true, 00:08:34.450 "zcopy": true, 00:08:34.450 "get_zone_info": false, 00:08:34.450 "zone_management": false, 00:08:34.450 "zone_append": false, 00:08:34.450 "compare": false, 00:08:34.450 "compare_and_write": false, 00:08:34.450 "abort": true, 00:08:34.450 "seek_hole": false, 00:08:34.450 "seek_data": false, 00:08:34.450 "copy": true, 00:08:34.450 "nvme_iov_md": false 00:08:34.450 }, 00:08:34.450 "memory_domains": [ 00:08:34.450 { 00:08:34.450 "dma_device_id": "system", 00:08:34.450 "dma_device_type": 1 00:08:34.450 }, 00:08:34.450 { 00:08:34.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.450 "dma_device_type": 2 00:08:34.450 } 00:08:34.450 ], 00:08:34.450 "driver_specific": {} 00:08:34.450 }, 00:08:34.450 { 00:08:34.450 "name": "Passthru0", 00:08:34.450 "aliases": [ 00:08:34.450 "ad7ed8b3-b0cd-57bc-9855-5508019ec0ed" 00:08:34.450 ], 00:08:34.450 "product_name": "passthru", 00:08:34.450 "block_size": 512, 00:08:34.450 "num_blocks": 16384, 00:08:34.450 "uuid": "ad7ed8b3-b0cd-57bc-9855-5508019ec0ed", 00:08:34.450 "assigned_rate_limits": { 00:08:34.450 "rw_ios_per_sec": 0, 00:08:34.450 "rw_mbytes_per_sec": 0, 00:08:34.450 "r_mbytes_per_sec": 0, 00:08:34.450 "w_mbytes_per_sec": 0 
00:08:34.450 }, 00:08:34.450 "claimed": false, 00:08:34.450 "zoned": false, 00:08:34.450 "supported_io_types": { 00:08:34.450 "read": true, 00:08:34.450 "write": true, 00:08:34.450 "unmap": true, 00:08:34.450 "flush": true, 00:08:34.450 "reset": true, 00:08:34.450 "nvme_admin": false, 00:08:34.450 "nvme_io": false, 00:08:34.450 "nvme_io_md": false, 00:08:34.450 "write_zeroes": true, 00:08:34.450 "zcopy": true, 00:08:34.450 "get_zone_info": false, 00:08:34.450 "zone_management": false, 00:08:34.450 "zone_append": false, 00:08:34.450 "compare": false, 00:08:34.450 "compare_and_write": false, 00:08:34.450 "abort": true, 00:08:34.450 "seek_hole": false, 00:08:34.450 "seek_data": false, 00:08:34.450 "copy": true, 00:08:34.450 "nvme_iov_md": false 00:08:34.450 }, 00:08:34.450 "memory_domains": [ 00:08:34.450 { 00:08:34.450 "dma_device_id": "system", 00:08:34.450 "dma_device_type": 1 00:08:34.450 }, 00:08:34.450 { 00:08:34.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.450 "dma_device_type": 2 00:08:34.450 } 00:08:34.450 ], 00:08:34.450 "driver_specific": { 00:08:34.450 "passthru": { 00:08:34.450 "name": "Passthru0", 00:08:34.450 "base_bdev_name": "Malloc2" 00:08:34.450 } 00:08:34.450 } 00:08:34.450 } 00:08:34.450 ]' 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:34.450 00:08:34.450 real 0m0.372s 00:08:34.450 user 0m0.207s 00:08:34.450 sys 0m0.057s 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.450 14:08:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:34.450 ************************************ 00:08:34.450 END TEST rpc_daemon_integrity 00:08:34.450 ************************************ 00:08:34.709 14:08:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:34.709 14:08:05 rpc -- rpc/rpc.sh@84 -- # killprocess 57059 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 57059 ']' 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@958 -- # kill -0 57059 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@959 -- # uname 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57059 00:08:34.709 killing process with pid 57059 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57059' 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@973 -- # kill 57059 00:08:34.709 14:08:05 rpc -- common/autotest_common.sh@978 -- # wait 57059 00:08:37.242 00:08:37.242 real 0m5.528s 00:08:37.242 user 0m6.140s 00:08:37.242 sys 0m0.947s 00:08:37.242 14:08:08 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.242 ************************************ 00:08:37.242 END TEST rpc 00:08:37.242 ************************************ 00:08:37.242 14:08:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.242 14:08:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:37.242 14:08:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.242 14:08:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.242 14:08:08 -- common/autotest_common.sh@10 -- # set +x 00:08:37.242 ************************************ 00:08:37.242 START TEST skip_rpc 00:08:37.242 ************************************ 00:08:37.242 14:08:08 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:08:37.242 * Looking for test storage... 
00:08:37.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:37.242 14:08:08 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.242 14:08:08 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.242 14:08:08 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.499 14:08:08 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.499 14:08:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:37.499 14:08:08 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.499 14:08:08 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.500 --rc genhtml_branch_coverage=1 00:08:37.500 --rc genhtml_function_coverage=1 00:08:37.500 --rc genhtml_legend=1 00:08:37.500 --rc geninfo_all_blocks=1 00:08:37.500 --rc geninfo_unexecuted_blocks=1 00:08:37.500 00:08:37.500 ' 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.500 --rc genhtml_branch_coverage=1 00:08:37.500 --rc genhtml_function_coverage=1 00:08:37.500 --rc genhtml_legend=1 00:08:37.500 --rc geninfo_all_blocks=1 00:08:37.500 --rc geninfo_unexecuted_blocks=1 00:08:37.500 00:08:37.500 ' 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:08:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.500 --rc genhtml_branch_coverage=1 00:08:37.500 --rc genhtml_function_coverage=1 00:08:37.500 --rc genhtml_legend=1 00:08:37.500 --rc geninfo_all_blocks=1 00:08:37.500 --rc geninfo_unexecuted_blocks=1 00:08:37.500 00:08:37.500 ' 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.500 --rc genhtml_branch_coverage=1 00:08:37.500 --rc genhtml_function_coverage=1 00:08:37.500 --rc genhtml_legend=1 00:08:37.500 --rc geninfo_all_blocks=1 00:08:37.500 --rc geninfo_unexecuted_blocks=1 00:08:37.500 00:08:37.500 ' 00:08:37.500 14:08:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:37.500 14:08:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:37.500 14:08:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.500 14:08:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.500 ************************************ 00:08:37.500 START TEST skip_rpc 00:08:37.500 ************************************ 00:08:37.500 14:08:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:08:37.500 14:08:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57288 00:08:37.500 14:08:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:37.500 14:08:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:37.500 14:08:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:37.500 [2024-11-27 14:08:08.390708] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:37.500 [2024-11-27 14:08:08.390853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57288 ] 00:08:37.758 [2024-11-27 14:08:08.568892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.758 [2024-11-27 14:08:08.689031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57288 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57288 ']' 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57288 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57288 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57288' 00:08:43.063 killing process with pid 57288 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57288 00:08:43.063 14:08:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57288 00:08:44.974 00:08:44.974 real 0m7.504s 00:08:44.974 user 0m7.045s 00:08:44.974 sys 0m0.379s 00:08:44.974 14:08:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.974 14:08:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 ************************************ 00:08:44.974 END TEST skip_rpc 00:08:44.974 ************************************ 00:08:44.974 14:08:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:44.974 14:08:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.974 14:08:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.974 14:08:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 
************************************ 00:08:44.974 START TEST skip_rpc_with_json 00:08:44.974 ************************************ 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57392 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57392 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57392 ']' 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.974 14:08:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:45.234 [2024-11-27 14:08:15.967717] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:45.234 [2024-11-27 14:08:15.967839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57392 ] 00:08:45.234 [2024-11-27 14:08:16.139179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.493 [2024-11-27 14:08:16.267367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:46.432 [2024-11-27 14:08:17.137075] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:46.432 request: 00:08:46.432 { 00:08:46.432 "trtype": "tcp", 00:08:46.432 "method": "nvmf_get_transports", 00:08:46.432 "req_id": 1 00:08:46.432 } 00:08:46.432 Got JSON-RPC error response 00:08:46.432 response: 00:08:46.432 { 00:08:46.432 "code": -19, 00:08:46.432 "message": "No such device" 00:08:46.432 } 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:46.432 [2024-11-27 14:08:17.149198] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.432 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:46.432 { 00:08:46.432 "subsystems": [ 00:08:46.432 { 00:08:46.432 "subsystem": "fsdev", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "fsdev_set_opts", 00:08:46.432 "params": { 00:08:46.432 "fsdev_io_pool_size": 65535, 00:08:46.432 "fsdev_io_cache_size": 256 00:08:46.432 } 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "keyring", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "iobuf", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "iobuf_set_options", 00:08:46.432 "params": { 00:08:46.432 "small_pool_count": 8192, 00:08:46.432 "large_pool_count": 1024, 00:08:46.432 "small_bufsize": 8192, 00:08:46.432 "large_bufsize": 135168, 00:08:46.432 "enable_numa": false 00:08:46.432 } 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "sock", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "sock_set_default_impl", 00:08:46.432 "params": { 00:08:46.432 "impl_name": "posix" 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "sock_impl_set_options", 00:08:46.432 "params": { 00:08:46.432 "impl_name": "ssl", 00:08:46.432 "recv_buf_size": 4096, 00:08:46.432 "send_buf_size": 4096, 00:08:46.432 "enable_recv_pipe": true, 00:08:46.432 "enable_quickack": false, 00:08:46.432 
"enable_placement_id": 0, 00:08:46.432 "enable_zerocopy_send_server": true, 00:08:46.432 "enable_zerocopy_send_client": false, 00:08:46.432 "zerocopy_threshold": 0, 00:08:46.432 "tls_version": 0, 00:08:46.432 "enable_ktls": false 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "sock_impl_set_options", 00:08:46.432 "params": { 00:08:46.432 "impl_name": "posix", 00:08:46.432 "recv_buf_size": 2097152, 00:08:46.432 "send_buf_size": 2097152, 00:08:46.432 "enable_recv_pipe": true, 00:08:46.432 "enable_quickack": false, 00:08:46.432 "enable_placement_id": 0, 00:08:46.432 "enable_zerocopy_send_server": true, 00:08:46.432 "enable_zerocopy_send_client": false, 00:08:46.432 "zerocopy_threshold": 0, 00:08:46.432 "tls_version": 0, 00:08:46.432 "enable_ktls": false 00:08:46.432 } 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "vmd", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "accel", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "accel_set_options", 00:08:46.432 "params": { 00:08:46.432 "small_cache_size": 128, 00:08:46.432 "large_cache_size": 16, 00:08:46.432 "task_count": 2048, 00:08:46.432 "sequence_count": 2048, 00:08:46.432 "buf_count": 2048 00:08:46.432 } 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "bdev", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "bdev_set_options", 00:08:46.432 "params": { 00:08:46.432 "bdev_io_pool_size": 65535, 00:08:46.432 "bdev_io_cache_size": 256, 00:08:46.432 "bdev_auto_examine": true, 00:08:46.432 "iobuf_small_cache_size": 128, 00:08:46.432 "iobuf_large_cache_size": 16 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "bdev_raid_set_options", 00:08:46.432 "params": { 00:08:46.432 "process_window_size_kb": 1024, 00:08:46.432 "process_max_bandwidth_mb_sec": 0 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "bdev_iscsi_set_options", 
00:08:46.432 "params": { 00:08:46.432 "timeout_sec": 30 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "bdev_nvme_set_options", 00:08:46.432 "params": { 00:08:46.432 "action_on_timeout": "none", 00:08:46.432 "timeout_us": 0, 00:08:46.432 "timeout_admin_us": 0, 00:08:46.432 "keep_alive_timeout_ms": 10000, 00:08:46.432 "arbitration_burst": 0, 00:08:46.432 "low_priority_weight": 0, 00:08:46.432 "medium_priority_weight": 0, 00:08:46.432 "high_priority_weight": 0, 00:08:46.432 "nvme_adminq_poll_period_us": 10000, 00:08:46.432 "nvme_ioq_poll_period_us": 0, 00:08:46.432 "io_queue_requests": 0, 00:08:46.432 "delay_cmd_submit": true, 00:08:46.432 "transport_retry_count": 4, 00:08:46.432 "bdev_retry_count": 3, 00:08:46.432 "transport_ack_timeout": 0, 00:08:46.432 "ctrlr_loss_timeout_sec": 0, 00:08:46.432 "reconnect_delay_sec": 0, 00:08:46.432 "fast_io_fail_timeout_sec": 0, 00:08:46.432 "disable_auto_failback": false, 00:08:46.432 "generate_uuids": false, 00:08:46.432 "transport_tos": 0, 00:08:46.432 "nvme_error_stat": false, 00:08:46.432 "rdma_srq_size": 0, 00:08:46.432 "io_path_stat": false, 00:08:46.432 "allow_accel_sequence": false, 00:08:46.432 "rdma_max_cq_size": 0, 00:08:46.432 "rdma_cm_event_timeout_ms": 0, 00:08:46.432 "dhchap_digests": [ 00:08:46.432 "sha256", 00:08:46.432 "sha384", 00:08:46.432 "sha512" 00:08:46.432 ], 00:08:46.432 "dhchap_dhgroups": [ 00:08:46.432 "null", 00:08:46.432 "ffdhe2048", 00:08:46.432 "ffdhe3072", 00:08:46.432 "ffdhe4096", 00:08:46.432 "ffdhe6144", 00:08:46.432 "ffdhe8192" 00:08:46.432 ] 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "bdev_nvme_set_hotplug", 00:08:46.432 "params": { 00:08:46.432 "period_us": 100000, 00:08:46.432 "enable": false 00:08:46.432 } 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "method": "bdev_wait_for_examine" 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "scsi", 00:08:46.432 "config": null 00:08:46.432 }, 00:08:46.432 { 
00:08:46.432 "subsystem": "scheduler", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "framework_set_scheduler", 00:08:46.432 "params": { 00:08:46.432 "name": "static" 00:08:46.432 } 00:08:46.432 } 00:08:46.432 ] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "vhost_scsi", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "vhost_blk", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "ublk", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "nbd", 00:08:46.432 "config": [] 00:08:46.432 }, 00:08:46.432 { 00:08:46.432 "subsystem": "nvmf", 00:08:46.432 "config": [ 00:08:46.432 { 00:08:46.432 "method": "nvmf_set_config", 00:08:46.432 "params": { 00:08:46.433 "discovery_filter": "match_any", 00:08:46.433 "admin_cmd_passthru": { 00:08:46.433 "identify_ctrlr": false 00:08:46.433 }, 00:08:46.433 "dhchap_digests": [ 00:08:46.433 "sha256", 00:08:46.433 "sha384", 00:08:46.433 "sha512" 00:08:46.433 ], 00:08:46.433 "dhchap_dhgroups": [ 00:08:46.433 "null", 00:08:46.433 "ffdhe2048", 00:08:46.433 "ffdhe3072", 00:08:46.433 "ffdhe4096", 00:08:46.433 "ffdhe6144", 00:08:46.433 "ffdhe8192" 00:08:46.433 ] 00:08:46.433 } 00:08:46.433 }, 00:08:46.433 { 00:08:46.433 "method": "nvmf_set_max_subsystems", 00:08:46.433 "params": { 00:08:46.433 "max_subsystems": 1024 00:08:46.433 } 00:08:46.433 }, 00:08:46.433 { 00:08:46.433 "method": "nvmf_set_crdt", 00:08:46.433 "params": { 00:08:46.433 "crdt1": 0, 00:08:46.433 "crdt2": 0, 00:08:46.433 "crdt3": 0 00:08:46.433 } 00:08:46.433 }, 00:08:46.433 { 00:08:46.433 "method": "nvmf_create_transport", 00:08:46.433 "params": { 00:08:46.433 "trtype": "TCP", 00:08:46.433 "max_queue_depth": 128, 00:08:46.433 "max_io_qpairs_per_ctrlr": 127, 00:08:46.433 "in_capsule_data_size": 4096, 00:08:46.433 "max_io_size": 131072, 00:08:46.433 "io_unit_size": 131072, 00:08:46.433 "max_aq_depth": 128, 00:08:46.433 "num_shared_buffers": 511, 
00:08:46.433 "buf_cache_size": 4294967295, 00:08:46.433 "dif_insert_or_strip": false, 00:08:46.433 "zcopy": false, 00:08:46.433 "c2h_success": true, 00:08:46.433 "sock_priority": 0, 00:08:46.433 "abort_timeout_sec": 1, 00:08:46.433 "ack_timeout": 0, 00:08:46.433 "data_wr_pool_size": 0 00:08:46.433 } 00:08:46.433 } 00:08:46.433 ] 00:08:46.433 }, 00:08:46.433 { 00:08:46.433 "subsystem": "iscsi", 00:08:46.433 "config": [ 00:08:46.433 { 00:08:46.433 "method": "iscsi_set_options", 00:08:46.433 "params": { 00:08:46.433 "node_base": "iqn.2016-06.io.spdk", 00:08:46.433 "max_sessions": 128, 00:08:46.433 "max_connections_per_session": 2, 00:08:46.433 "max_queue_depth": 64, 00:08:46.433 "default_time2wait": 2, 00:08:46.433 "default_time2retain": 20, 00:08:46.433 "first_burst_length": 8192, 00:08:46.433 "immediate_data": true, 00:08:46.433 "allow_duplicated_isid": false, 00:08:46.433 "error_recovery_level": 0, 00:08:46.433 "nop_timeout": 60, 00:08:46.433 "nop_in_interval": 30, 00:08:46.433 "disable_chap": false, 00:08:46.433 "require_chap": false, 00:08:46.433 "mutual_chap": false, 00:08:46.433 "chap_group": 0, 00:08:46.433 "max_large_datain_per_connection": 64, 00:08:46.433 "max_r2t_per_connection": 4, 00:08:46.433 "pdu_pool_size": 36864, 00:08:46.433 "immediate_data_pool_size": 16384, 00:08:46.433 "data_out_pool_size": 2048 00:08:46.433 } 00:08:46.433 } 00:08:46.433 ] 00:08:46.433 } 00:08:46.433 ] 00:08:46.433 } 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57392 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57392 ']' 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57392 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57392 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.433 killing process with pid 57392 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57392' 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57392 00:08:46.433 14:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57392 00:08:48.969 14:08:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57448 00:08:48.969 14:08:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:48.969 14:08:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57448 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57448 ']' 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57448 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57448 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:08:54.247 killing process with pid 57448 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57448' 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57448 00:08:54.247 14:08:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57448 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:56.783 00:08:56.783 real 0m11.535s 00:08:56.783 user 0m10.961s 00:08:56.783 sys 0m0.878s 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 ************************************ 00:08:56.783 END TEST skip_rpc_with_json 00:08:56.783 ************************************ 00:08:56.783 14:08:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 ************************************ 00:08:56.783 START TEST skip_rpc_with_delay 00:08:56.783 ************************************ 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:56.783 14:08:27 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:56.783 [2024-11-27 14:08:27.560285] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.783 00:08:56.783 real 0m0.168s 00:08:56.783 user 0m0.097s 00:08:56.783 sys 0m0.069s 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.783 14:08:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 ************************************ 00:08:56.783 END TEST skip_rpc_with_delay 00:08:56.783 ************************************ 00:08:56.783 14:08:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:56.783 14:08:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:56.783 14:08:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.783 14:08:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.783 ************************************ 00:08:56.783 START TEST exit_on_failed_rpc_init 00:08:56.783 ************************************ 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57587 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57587 00:08:56.783 14:08:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57587 ']' 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.783 14:08:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:57.042 [2024-11-27 14:08:27.790081] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:08:57.042 [2024-11-27 14:08:27.790223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57587 ] 00:08:57.042 [2024-11-27 14:08:27.952647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.301 [2024-11-27 14:08:28.083960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:58.235 14:08:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:58.235 14:08:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:58.235 [2024-11-27 14:08:29.099207] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:08:58.235 [2024-11-27 14:08:29.099342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57605 ] 00:08:58.494 [2024-11-27 14:08:29.277088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.494 [2024-11-27 14:08:29.414105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.494 [2024-11-27 14:08:29.414224] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:58.494 [2024-11-27 14:08:29.414244] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:58.494 [2024-11-27 14:08:29.414261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57587 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57587 ']' 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57587 00:08:58.754 14:08:29 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.754 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57587 00:08:59.015 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.015 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.015 killing process with pid 57587 00:08:59.015 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57587' 00:08:59.015 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57587 00:08:59.015 14:08:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57587 00:09:01.548 00:09:01.548 real 0m4.560s 00:09:01.548 user 0m4.955s 00:09:01.548 sys 0m0.554s 00:09:01.548 14:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.548 14:08:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:01.548 ************************************ 00:09:01.548 END TEST exit_on_failed_rpc_init 00:09:01.548 ************************************ 00:09:01.548 14:08:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:01.548 ************************************ 00:09:01.548 END TEST skip_rpc 00:09:01.548 ************************************ 00:09:01.548 00:09:01.548 real 0m24.235s 00:09:01.548 user 0m23.261s 00:09:01.548 sys 0m2.168s 00:09:01.548 14:08:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.548 14:08:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.548 14:08:32 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:01.548 14:08:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.548 14:08:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.548 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.548 ************************************ 00:09:01.548 START TEST rpc_client 00:09:01.548 ************************************ 00:09:01.548 14:08:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:01.548 * Looking for test storage... 00:09:01.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:01.548 14:08:32 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.548 14:08:32 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.548 14:08:32 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@345 
-- # : 1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.807 14:08:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc 
genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.807 --rc genhtml_branch_coverage=1 00:09:01.807 --rc genhtml_function_coverage=1 00:09:01.807 --rc genhtml_legend=1 00:09:01.807 --rc geninfo_all_blocks=1 00:09:01.807 --rc geninfo_unexecuted_blocks=1 00:09:01.807 00:09:01.807 ' 00:09:01.807 14:08:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:01.807 OK 00:09:01.807 14:08:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:01.807 00:09:01.807 real 0m0.285s 00:09:01.807 user 0m0.157s 00:09:01.807 sys 0m0.145s 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.807 14:08:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:01.807 ************************************ 00:09:01.807 END TEST rpc_client 00:09:01.807 ************************************ 00:09:01.807 14:08:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:01.807 14:08:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.807 14:08:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.807 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.807 ************************************ 00:09:01.807 START TEST json_config 
00:09:01.807 ************************************ 00:09:01.807 14:08:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.067 14:08:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.067 14:08:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.067 14:08:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.067 14:08:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.067 14:08:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.067 14:08:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:02.067 14:08:32 json_config -- scripts/common.sh@345 -- # : 1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.067 14:08:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.067 14:08:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@353 -- # local d=1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.067 14:08:32 json_config -- scripts/common.sh@355 -- # echo 1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.067 14:08:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@353 -- # local d=2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.067 14:08:32 json_config -- scripts/common.sh@355 -- # echo 2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.067 14:08:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.067 14:08:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.067 14:08:32 json_config -- scripts/common.sh@368 -- # return 0 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.067 --rc genhtml_branch_coverage=1 00:09:02.067 --rc genhtml_function_coverage=1 00:09:02.067 --rc genhtml_legend=1 00:09:02.067 --rc geninfo_all_blocks=1 00:09:02.067 --rc geninfo_unexecuted_blocks=1 00:09:02.067 00:09:02.067 ' 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.067 --rc genhtml_branch_coverage=1 00:09:02.067 --rc genhtml_function_coverage=1 00:09:02.067 --rc genhtml_legend=1 00:09:02.067 --rc geninfo_all_blocks=1 00:09:02.067 --rc geninfo_unexecuted_blocks=1 00:09:02.067 00:09:02.067 ' 00:09:02.067 14:08:32 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.067 --rc genhtml_branch_coverage=1 00:09:02.067 --rc genhtml_function_coverage=1 00:09:02.067 --rc genhtml_legend=1 00:09:02.067 --rc geninfo_all_blocks=1 00:09:02.067 --rc geninfo_unexecuted_blocks=1 00:09:02.067 00:09:02.067 ' 00:09:02.067 14:08:32 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.067 --rc genhtml_branch_coverage=1 00:09:02.067 --rc genhtml_function_coverage=1 00:09:02.067 --rc genhtml_legend=1 00:09:02.068 --rc geninfo_all_blocks=1 00:09:02.068 --rc geninfo_unexecuted_blocks=1 00:09:02.068 00:09:02.068 ' 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b99a9277-5741-41d9-98a0-55197f077e50 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=b99a9277-5741-41d9-98a0-55197f077e50 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.068 14:08:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.068 14:08:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.068 14:08:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.068 14:08:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.068 14:08:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.068 14:08:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.068 14:08:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.068 14:08:32 json_config -- paths/export.sh@5 -- # export PATH 00:09:02.068 14:08:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@51 -- # : 0 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.068 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.068 14:08:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:02.068 WARNING: No tests are enabled so not running JSON configuration tests 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:09:02.068 14:08:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:09:02.068 00:09:02.068 real 0m0.216s 00:09:02.068 user 0m0.129s 00:09:02.068 sys 0m0.095s 00:09:02.068 14:08:32 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.068 14:08:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:02.068 ************************************ 00:09:02.068 END TEST json_config 00:09:02.068 ************************************ 00:09:02.068 14:08:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:02.068 14:08:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.068 14:08:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.068 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.068 ************************************ 00:09:02.068 START TEST json_config_extra_key 00:09:02.068 ************************************ 00:09:02.068 14:08:32 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:02.328 14:08:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:02.328 14:08:33 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:09:02.328 14:08:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:02.328 14:08:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.328 14:08:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:02.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.329 --rc genhtml_branch_coverage=1 00:09:02.329 --rc genhtml_function_coverage=1 00:09:02.329 --rc genhtml_legend=1 00:09:02.329 --rc geninfo_all_blocks=1 00:09:02.329 --rc geninfo_unexecuted_blocks=1 00:09:02.329 00:09:02.329 ' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:02.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.329 --rc genhtml_branch_coverage=1 00:09:02.329 --rc genhtml_function_coverage=1 00:09:02.329 --rc 
genhtml_legend=1 00:09:02.329 --rc geninfo_all_blocks=1 00:09:02.329 --rc geninfo_unexecuted_blocks=1 00:09:02.329 00:09:02.329 ' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:02.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.329 --rc genhtml_branch_coverage=1 00:09:02.329 --rc genhtml_function_coverage=1 00:09:02.329 --rc genhtml_legend=1 00:09:02.329 --rc geninfo_all_blocks=1 00:09:02.329 --rc geninfo_unexecuted_blocks=1 00:09:02.329 00:09:02.329 ' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:02.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.329 --rc genhtml_branch_coverage=1 00:09:02.329 --rc genhtml_function_coverage=1 00:09:02.329 --rc genhtml_legend=1 00:09:02.329 --rc geninfo_all_blocks=1 00:09:02.329 --rc geninfo_unexecuted_blocks=1 00:09:02.329 00:09:02.329 ' 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b99a9277-5741-41d9-98a0-55197f077e50 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b99a9277-5741-41d9-98a0-55197f077e50 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.329 14:08:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.329 14:08:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.329 14:08:33 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.329 14:08:33 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.329 14:08:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:02.329 14:08:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.329 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.329 14:08:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:02.329 INFO: launching applications... 00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:09:02.329 14:08:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57815 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:02.329 Waiting for target to run... 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:02.329 14:08:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57815 /var/tmp/spdk_tgt.sock 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57815 ']' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.329 14:08:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:02.589 [2024-11-27 14:08:33.292460] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:02.589 [2024-11-27 14:08:33.292709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57815 ] 00:09:02.849 [2024-11-27 14:08:33.703619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.108 [2024-11-27 14:08:33.827453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.677 14:08:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.678 00:09:03.678 14:08:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:03.678 INFO: shutting down applications... 00:09:03.678 14:08:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:09:03.678 14:08:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57815 ]] 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57815 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:03.678 14:08:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:04.247 14:08:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:04.247 14:08:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:04.247 14:08:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:04.247 14:08:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:04.817 14:08:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:04.817 14:08:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:04.817 14:08:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:04.817 14:08:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:05.387 14:08:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:05.387 14:08:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.387 14:08:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:05.387 14:08:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:05.955 14:08:36 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:09:05.955 14:08:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:05.955 14:08:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:05.955 14:08:36 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:06.214 14:08:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:06.214 14:08:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:06.214 14:08:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:06.214 14:08:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57815 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:06.783 SPDK target shutdown done 00:09:06.783 14:08:37 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:06.783 Success 00:09:06.783 14:08:37 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:06.783 00:09:06.783 real 0m4.667s 00:09:06.783 user 0m4.241s 00:09:06.783 sys 0m0.555s 00:09:06.783 14:08:37 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.783 14:08:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:06.783 ************************************ 00:09:06.783 END TEST json_config_extra_key 00:09:06.783 ************************************ 00:09:06.783 14:08:37 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:06.783 14:08:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.783 14:08:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.783 14:08:37 -- common/autotest_common.sh@10 -- # set +x 00:09:06.783 ************************************ 00:09:06.783 START TEST alias_rpc 00:09:06.783 ************************************ 00:09:06.784 14:08:37 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:07.044 * Looking for test storage... 00:09:07.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:07.044 14:08:37 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.044 14:08:37 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.044 --rc genhtml_branch_coverage=1 00:09:07.044 --rc genhtml_function_coverage=1 00:09:07.044 --rc genhtml_legend=1 00:09:07.044 --rc geninfo_all_blocks=1 00:09:07.044 --rc geninfo_unexecuted_blocks=1 00:09:07.044 00:09:07.044 ' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.044 --rc genhtml_branch_coverage=1 00:09:07.044 --rc genhtml_function_coverage=1 00:09:07.044 --rc 
genhtml_legend=1 00:09:07.044 --rc geninfo_all_blocks=1 00:09:07.044 --rc geninfo_unexecuted_blocks=1 00:09:07.044 00:09:07.044 ' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.044 --rc genhtml_branch_coverage=1 00:09:07.044 --rc genhtml_function_coverage=1 00:09:07.044 --rc genhtml_legend=1 00:09:07.044 --rc geninfo_all_blocks=1 00:09:07.044 --rc geninfo_unexecuted_blocks=1 00:09:07.044 00:09:07.044 ' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:07.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.044 --rc genhtml_branch_coverage=1 00:09:07.044 --rc genhtml_function_coverage=1 00:09:07.044 --rc genhtml_legend=1 00:09:07.044 --rc geninfo_all_blocks=1 00:09:07.044 --rc geninfo_unexecuted_blocks=1 00:09:07.044 00:09:07.044 ' 00:09:07.044 14:08:37 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:07.044 14:08:37 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57928 00:09:07.044 14:08:37 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:07.044 14:08:37 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57928 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57928 ']' 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.044 14:08:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.304 [2024-11-27 14:08:38.007062] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:07.304 [2024-11-27 14:08:38.007655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57928 ] 00:09:07.304 [2024-11-27 14:08:38.187013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.563 [2024-11-27 14:08:38.306239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.520 14:08:39 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.520 14:08:39 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:08.520 14:08:39 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:08.520 14:08:39 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57928 00:09:08.520 14:08:39 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57928 ']' 00:09:08.520 14:08:39 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57928 00:09:08.520 14:08:39 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57928 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57928' 00:09:08.779 killing process with pid 57928 00:09:08.779 14:08:39 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57928 00:09:08.779 14:08:39 alias_rpc -- common/autotest_common.sh@978 -- # wait 57928 00:09:11.315 00:09:11.315 real 0m4.301s 00:09:11.315 user 0m4.381s 00:09:11.315 sys 0m0.562s 00:09:11.315 14:08:41 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.315 14:08:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.315 ************************************ 00:09:11.315 END TEST alias_rpc 00:09:11.315 ************************************ 00:09:11.316 14:08:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:11.316 14:08:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:11.316 14:08:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.316 14:08:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.316 14:08:42 -- common/autotest_common.sh@10 -- # set +x 00:09:11.316 ************************************ 00:09:11.316 START TEST spdkcli_tcp 00:09:11.316 ************************************ 00:09:11.316 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:11.316 * Looking for test storage... 
00:09:11.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:11.316 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.316 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.316 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.575 14:08:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.575 --rc genhtml_branch_coverage=1 00:09:11.575 --rc genhtml_function_coverage=1 00:09:11.575 --rc genhtml_legend=1 00:09:11.575 --rc geninfo_all_blocks=1 00:09:11.575 --rc geninfo_unexecuted_blocks=1 00:09:11.575 00:09:11.575 ' 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.575 --rc genhtml_branch_coverage=1 00:09:11.575 --rc genhtml_function_coverage=1 00:09:11.575 --rc genhtml_legend=1 00:09:11.575 --rc geninfo_all_blocks=1 00:09:11.575 --rc geninfo_unexecuted_blocks=1 00:09:11.575 00:09:11.575 ' 00:09:11.575 14:08:42 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.575 --rc genhtml_branch_coverage=1 00:09:11.575 --rc genhtml_function_coverage=1 00:09:11.575 --rc genhtml_legend=1 00:09:11.575 --rc geninfo_all_blocks=1 00:09:11.575 --rc geninfo_unexecuted_blocks=1 00:09:11.575 00:09:11.575 ' 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.575 --rc genhtml_branch_coverage=1 00:09:11.575 --rc genhtml_function_coverage=1 00:09:11.575 --rc genhtml_legend=1 00:09:11.575 --rc geninfo_all_blocks=1 00:09:11.575 --rc geninfo_unexecuted_blocks=1 00:09:11.575 00:09:11.575 ' 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.575 14:08:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58035 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:11.575 14:08:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58035 00:09:11.576 14:08:42 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58035 ']' 00:09:11.576 14:08:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.576 14:08:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.576 14:08:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.576 14:08:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.576 14:08:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.576 [2024-11-27 14:08:42.411902] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:11.576 [2024-11-27 14:08:42.412058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58035 ] 00:09:11.835 [2024-11-27 14:08:42.585429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:11.835 [2024-11-27 14:08:42.710803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.835 [2024-11-27 14:08:42.710837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.773 14:08:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.773 14:08:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:12.773 14:08:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:12.773 14:08:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58057 00:09:12.773 14:08:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:13.033 [ 00:09:13.033 "bdev_malloc_delete", 
00:09:13.033 "bdev_malloc_create", 00:09:13.033 "bdev_null_resize", 00:09:13.033 "bdev_null_delete", 00:09:13.033 "bdev_null_create", 00:09:13.033 "bdev_nvme_cuse_unregister", 00:09:13.033 "bdev_nvme_cuse_register", 00:09:13.033 "bdev_opal_new_user", 00:09:13.033 "bdev_opal_set_lock_state", 00:09:13.033 "bdev_opal_delete", 00:09:13.033 "bdev_opal_get_info", 00:09:13.033 "bdev_opal_create", 00:09:13.033 "bdev_nvme_opal_revert", 00:09:13.033 "bdev_nvme_opal_init", 00:09:13.033 "bdev_nvme_send_cmd", 00:09:13.033 "bdev_nvme_set_keys", 00:09:13.033 "bdev_nvme_get_path_iostat", 00:09:13.033 "bdev_nvme_get_mdns_discovery_info", 00:09:13.033 "bdev_nvme_stop_mdns_discovery", 00:09:13.033 "bdev_nvme_start_mdns_discovery", 00:09:13.033 "bdev_nvme_set_multipath_policy", 00:09:13.033 "bdev_nvme_set_preferred_path", 00:09:13.033 "bdev_nvme_get_io_paths", 00:09:13.033 "bdev_nvme_remove_error_injection", 00:09:13.033 "bdev_nvme_add_error_injection", 00:09:13.033 "bdev_nvme_get_discovery_info", 00:09:13.033 "bdev_nvme_stop_discovery", 00:09:13.033 "bdev_nvme_start_discovery", 00:09:13.033 "bdev_nvme_get_controller_health_info", 00:09:13.033 "bdev_nvme_disable_controller", 00:09:13.033 "bdev_nvme_enable_controller", 00:09:13.033 "bdev_nvme_reset_controller", 00:09:13.033 "bdev_nvme_get_transport_statistics", 00:09:13.033 "bdev_nvme_apply_firmware", 00:09:13.033 "bdev_nvme_detach_controller", 00:09:13.033 "bdev_nvme_get_controllers", 00:09:13.033 "bdev_nvme_attach_controller", 00:09:13.033 "bdev_nvme_set_hotplug", 00:09:13.033 "bdev_nvme_set_options", 00:09:13.033 "bdev_passthru_delete", 00:09:13.033 "bdev_passthru_create", 00:09:13.033 "bdev_lvol_set_parent_bdev", 00:09:13.034 "bdev_lvol_set_parent", 00:09:13.034 "bdev_lvol_check_shallow_copy", 00:09:13.034 "bdev_lvol_start_shallow_copy", 00:09:13.034 "bdev_lvol_grow_lvstore", 00:09:13.034 "bdev_lvol_get_lvols", 00:09:13.034 "bdev_lvol_get_lvstores", 00:09:13.034 "bdev_lvol_delete", 00:09:13.034 "bdev_lvol_set_read_only", 
00:09:13.034 "bdev_lvol_resize", 00:09:13.034 "bdev_lvol_decouple_parent", 00:09:13.034 "bdev_lvol_inflate", 00:09:13.034 "bdev_lvol_rename", 00:09:13.034 "bdev_lvol_clone_bdev", 00:09:13.034 "bdev_lvol_clone", 00:09:13.034 "bdev_lvol_snapshot", 00:09:13.034 "bdev_lvol_create", 00:09:13.034 "bdev_lvol_delete_lvstore", 00:09:13.034 "bdev_lvol_rename_lvstore", 00:09:13.034 "bdev_lvol_create_lvstore", 00:09:13.034 "bdev_raid_set_options", 00:09:13.034 "bdev_raid_remove_base_bdev", 00:09:13.034 "bdev_raid_add_base_bdev", 00:09:13.034 "bdev_raid_delete", 00:09:13.034 "bdev_raid_create", 00:09:13.034 "bdev_raid_get_bdevs", 00:09:13.034 "bdev_error_inject_error", 00:09:13.034 "bdev_error_delete", 00:09:13.034 "bdev_error_create", 00:09:13.034 "bdev_split_delete", 00:09:13.034 "bdev_split_create", 00:09:13.034 "bdev_delay_delete", 00:09:13.034 "bdev_delay_create", 00:09:13.034 "bdev_delay_update_latency", 00:09:13.034 "bdev_zone_block_delete", 00:09:13.034 "bdev_zone_block_create", 00:09:13.034 "blobfs_create", 00:09:13.034 "blobfs_detect", 00:09:13.034 "blobfs_set_cache_size", 00:09:13.034 "bdev_aio_delete", 00:09:13.034 "bdev_aio_rescan", 00:09:13.034 "bdev_aio_create", 00:09:13.034 "bdev_ftl_set_property", 00:09:13.034 "bdev_ftl_get_properties", 00:09:13.034 "bdev_ftl_get_stats", 00:09:13.034 "bdev_ftl_unmap", 00:09:13.034 "bdev_ftl_unload", 00:09:13.034 "bdev_ftl_delete", 00:09:13.034 "bdev_ftl_load", 00:09:13.034 "bdev_ftl_create", 00:09:13.034 "bdev_virtio_attach_controller", 00:09:13.034 "bdev_virtio_scsi_get_devices", 00:09:13.034 "bdev_virtio_detach_controller", 00:09:13.034 "bdev_virtio_blk_set_hotplug", 00:09:13.034 "bdev_iscsi_delete", 00:09:13.034 "bdev_iscsi_create", 00:09:13.034 "bdev_iscsi_set_options", 00:09:13.034 "accel_error_inject_error", 00:09:13.034 "ioat_scan_accel_module", 00:09:13.034 "dsa_scan_accel_module", 00:09:13.034 "iaa_scan_accel_module", 00:09:13.034 "keyring_file_remove_key", 00:09:13.034 "keyring_file_add_key", 00:09:13.034 
"keyring_linux_set_options", 00:09:13.034 "fsdev_aio_delete", 00:09:13.034 "fsdev_aio_create", 00:09:13.034 "iscsi_get_histogram", 00:09:13.034 "iscsi_enable_histogram", 00:09:13.034 "iscsi_set_options", 00:09:13.034 "iscsi_get_auth_groups", 00:09:13.034 "iscsi_auth_group_remove_secret", 00:09:13.034 "iscsi_auth_group_add_secret", 00:09:13.034 "iscsi_delete_auth_group", 00:09:13.034 "iscsi_create_auth_group", 00:09:13.034 "iscsi_set_discovery_auth", 00:09:13.034 "iscsi_get_options", 00:09:13.034 "iscsi_target_node_request_logout", 00:09:13.034 "iscsi_target_node_set_redirect", 00:09:13.034 "iscsi_target_node_set_auth", 00:09:13.034 "iscsi_target_node_add_lun", 00:09:13.034 "iscsi_get_stats", 00:09:13.034 "iscsi_get_connections", 00:09:13.034 "iscsi_portal_group_set_auth", 00:09:13.034 "iscsi_start_portal_group", 00:09:13.034 "iscsi_delete_portal_group", 00:09:13.034 "iscsi_create_portal_group", 00:09:13.034 "iscsi_get_portal_groups", 00:09:13.034 "iscsi_delete_target_node", 00:09:13.034 "iscsi_target_node_remove_pg_ig_maps", 00:09:13.034 "iscsi_target_node_add_pg_ig_maps", 00:09:13.034 "iscsi_create_target_node", 00:09:13.034 "iscsi_get_target_nodes", 00:09:13.034 "iscsi_delete_initiator_group", 00:09:13.034 "iscsi_initiator_group_remove_initiators", 00:09:13.034 "iscsi_initiator_group_add_initiators", 00:09:13.034 "iscsi_create_initiator_group", 00:09:13.034 "iscsi_get_initiator_groups", 00:09:13.034 "nvmf_set_crdt", 00:09:13.034 "nvmf_set_config", 00:09:13.034 "nvmf_set_max_subsystems", 00:09:13.034 "nvmf_stop_mdns_prr", 00:09:13.034 "nvmf_publish_mdns_prr", 00:09:13.034 "nvmf_subsystem_get_listeners", 00:09:13.034 "nvmf_subsystem_get_qpairs", 00:09:13.034 "nvmf_subsystem_get_controllers", 00:09:13.034 "nvmf_get_stats", 00:09:13.034 "nvmf_get_transports", 00:09:13.034 "nvmf_create_transport", 00:09:13.034 "nvmf_get_targets", 00:09:13.034 "nvmf_delete_target", 00:09:13.034 "nvmf_create_target", 00:09:13.034 "nvmf_subsystem_allow_any_host", 00:09:13.034 
"nvmf_subsystem_set_keys", 00:09:13.034 "nvmf_subsystem_remove_host", 00:09:13.034 "nvmf_subsystem_add_host", 00:09:13.034 "nvmf_ns_remove_host", 00:09:13.034 "nvmf_ns_add_host", 00:09:13.034 "nvmf_subsystem_remove_ns", 00:09:13.034 "nvmf_subsystem_set_ns_ana_group", 00:09:13.034 "nvmf_subsystem_add_ns", 00:09:13.034 "nvmf_subsystem_listener_set_ana_state", 00:09:13.034 "nvmf_discovery_get_referrals", 00:09:13.034 "nvmf_discovery_remove_referral", 00:09:13.034 "nvmf_discovery_add_referral", 00:09:13.034 "nvmf_subsystem_remove_listener", 00:09:13.034 "nvmf_subsystem_add_listener", 00:09:13.034 "nvmf_delete_subsystem", 00:09:13.034 "nvmf_create_subsystem", 00:09:13.034 "nvmf_get_subsystems", 00:09:13.034 "env_dpdk_get_mem_stats", 00:09:13.034 "nbd_get_disks", 00:09:13.034 "nbd_stop_disk", 00:09:13.034 "nbd_start_disk", 00:09:13.034 "ublk_recover_disk", 00:09:13.034 "ublk_get_disks", 00:09:13.034 "ublk_stop_disk", 00:09:13.034 "ublk_start_disk", 00:09:13.034 "ublk_destroy_target", 00:09:13.034 "ublk_create_target", 00:09:13.034 "virtio_blk_create_transport", 00:09:13.034 "virtio_blk_get_transports", 00:09:13.034 "vhost_controller_set_coalescing", 00:09:13.034 "vhost_get_controllers", 00:09:13.034 "vhost_delete_controller", 00:09:13.034 "vhost_create_blk_controller", 00:09:13.034 "vhost_scsi_controller_remove_target", 00:09:13.034 "vhost_scsi_controller_add_target", 00:09:13.034 "vhost_start_scsi_controller", 00:09:13.034 "vhost_create_scsi_controller", 00:09:13.034 "thread_set_cpumask", 00:09:13.034 "scheduler_set_options", 00:09:13.034 "framework_get_governor", 00:09:13.034 "framework_get_scheduler", 00:09:13.034 "framework_set_scheduler", 00:09:13.034 "framework_get_reactors", 00:09:13.034 "thread_get_io_channels", 00:09:13.034 "thread_get_pollers", 00:09:13.034 "thread_get_stats", 00:09:13.034 "framework_monitor_context_switch", 00:09:13.034 "spdk_kill_instance", 00:09:13.034 "log_enable_timestamps", 00:09:13.034 "log_get_flags", 00:09:13.034 "log_clear_flag", 
00:09:13.034 "log_set_flag", 00:09:13.034 "log_get_level", 00:09:13.034 "log_set_level", 00:09:13.034 "log_get_print_level", 00:09:13.034 "log_set_print_level", 00:09:13.034 "framework_enable_cpumask_locks", 00:09:13.034 "framework_disable_cpumask_locks", 00:09:13.034 "framework_wait_init", 00:09:13.034 "framework_start_init", 00:09:13.034 "scsi_get_devices", 00:09:13.034 "bdev_get_histogram", 00:09:13.034 "bdev_enable_histogram", 00:09:13.034 "bdev_set_qos_limit", 00:09:13.034 "bdev_set_qd_sampling_period", 00:09:13.034 "bdev_get_bdevs", 00:09:13.034 "bdev_reset_iostat", 00:09:13.034 "bdev_get_iostat", 00:09:13.034 "bdev_examine", 00:09:13.034 "bdev_wait_for_examine", 00:09:13.034 "bdev_set_options", 00:09:13.034 "accel_get_stats", 00:09:13.034 "accel_set_options", 00:09:13.034 "accel_set_driver", 00:09:13.034 "accel_crypto_key_destroy", 00:09:13.034 "accel_crypto_keys_get", 00:09:13.034 "accel_crypto_key_create", 00:09:13.034 "accel_assign_opc", 00:09:13.034 "accel_get_module_info", 00:09:13.034 "accel_get_opc_assignments", 00:09:13.034 "vmd_rescan", 00:09:13.034 "vmd_remove_device", 00:09:13.034 "vmd_enable", 00:09:13.034 "sock_get_default_impl", 00:09:13.034 "sock_set_default_impl", 00:09:13.034 "sock_impl_set_options", 00:09:13.034 "sock_impl_get_options", 00:09:13.034 "iobuf_get_stats", 00:09:13.034 "iobuf_set_options", 00:09:13.034 "keyring_get_keys", 00:09:13.034 "framework_get_pci_devices", 00:09:13.034 "framework_get_config", 00:09:13.034 "framework_get_subsystems", 00:09:13.034 "fsdev_set_opts", 00:09:13.034 "fsdev_get_opts", 00:09:13.034 "trace_get_info", 00:09:13.034 "trace_get_tpoint_group_mask", 00:09:13.034 "trace_disable_tpoint_group", 00:09:13.034 "trace_enable_tpoint_group", 00:09:13.034 "trace_clear_tpoint_mask", 00:09:13.034 "trace_set_tpoint_mask", 00:09:13.034 "notify_get_notifications", 00:09:13.034 "notify_get_types", 00:09:13.034 "spdk_get_version", 00:09:13.034 "rpc_get_methods" 00:09:13.034 ] 00:09:13.034 14:08:43 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.034 14:08:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:13.034 14:08:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58035 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58035 ']' 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58035 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58035 00:09:13.034 killing process with pid 58035 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58035' 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58035 00:09:13.034 14:08:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58035 00:09:15.572 00:09:15.572 real 0m4.405s 00:09:15.572 user 0m7.919s 00:09:15.572 sys 0m0.672s 00:09:15.572 14:08:46 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.572 14:08:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.572 ************************************ 00:09:15.572 END TEST spdkcli_tcp 00:09:15.572 ************************************ 00:09:15.572 14:08:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:15.572 14:08:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.572 14:08:46 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.572 14:08:46 -- common/autotest_common.sh@10 -- # set +x 00:09:15.853 ************************************ 00:09:15.853 START TEST dpdk_mem_utility 00:09:15.853 ************************************ 00:09:15.853 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:15.853 * Looking for test storage... 00:09:15.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:15.853 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.853 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.853 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.853 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:15.853 14:08:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:15.854 
14:08:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.854 14:08:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.854 --rc genhtml_branch_coverage=1 00:09:15.854 --rc genhtml_function_coverage=1 00:09:15.854 --rc genhtml_legend=1 00:09:15.854 --rc geninfo_all_blocks=1 00:09:15.854 --rc geninfo_unexecuted_blocks=1 00:09:15.854 00:09:15.854 ' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.854 --rc 
genhtml_branch_coverage=1 00:09:15.854 --rc genhtml_function_coverage=1 00:09:15.854 --rc genhtml_legend=1 00:09:15.854 --rc geninfo_all_blocks=1 00:09:15.854 --rc geninfo_unexecuted_blocks=1 00:09:15.854 00:09:15.854 ' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.854 --rc genhtml_branch_coverage=1 00:09:15.854 --rc genhtml_function_coverage=1 00:09:15.854 --rc genhtml_legend=1 00:09:15.854 --rc geninfo_all_blocks=1 00:09:15.854 --rc geninfo_unexecuted_blocks=1 00:09:15.854 00:09:15.854 ' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.854 --rc genhtml_branch_coverage=1 00:09:15.854 --rc genhtml_function_coverage=1 00:09:15.854 --rc genhtml_legend=1 00:09:15.854 --rc geninfo_all_blocks=1 00:09:15.854 --rc geninfo_unexecuted_blocks=1 00:09:15.854 00:09:15.854 ' 00:09:15.854 14:08:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:15.854 14:08:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58162 00:09:15.854 14:08:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:15.854 14:08:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58162 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58162 ']' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.854 14:08:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:16.113 [2024-11-27 14:08:46.859189] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:16.113 [2024-11-27 14:08:46.859339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 00:09:16.113 [2024-11-27 14:08:47.030500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.371 [2024-11-27 14:08:47.150007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.312 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.312 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:17.312 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:17.312 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:17.312 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.312 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:17.312 { 00:09:17.312 "filename": "/tmp/spdk_mem_dump.txt" 00:09:17.312 } 00:09:17.312 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.312 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:17.312 DPDK memory size 824.000000 MiB in 1 heap(s) 00:09:17.312 1 heaps 
totaling size 824.000000 MiB 00:09:17.312 size: 824.000000 MiB heap id: 0 00:09:17.312 end heaps---------- 00:09:17.312 9 mempools totaling size 603.782043 MiB 00:09:17.312 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:17.312 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:17.312 size: 100.555481 MiB name: bdev_io_58162 00:09:17.312 size: 50.003479 MiB name: msgpool_58162 00:09:17.312 size: 36.509338 MiB name: fsdev_io_58162 00:09:17.312 size: 21.763794 MiB name: PDU_Pool 00:09:17.312 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:17.312 size: 4.133484 MiB name: evtpool_58162 00:09:17.312 size: 0.026123 MiB name: Session_Pool 00:09:17.312 end mempools------- 00:09:17.312 6 memzones totaling size 4.142822 MiB 00:09:17.312 size: 1.000366 MiB name: RG_ring_0_58162 00:09:17.312 size: 1.000366 MiB name: RG_ring_1_58162 00:09:17.312 size: 1.000366 MiB name: RG_ring_4_58162 00:09:17.312 size: 1.000366 MiB name: RG_ring_5_58162 00:09:17.312 size: 0.125366 MiB name: RG_ring_2_58162 00:09:17.312 size: 0.015991 MiB name: RG_ring_3_58162 00:09:17.312 end memzones------- 00:09:17.312 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:17.312 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:09:17.312 list of free elements. 
size: 16.778687 MiB 00:09:17.312 element at address: 0x200006400000 with size: 1.995972 MiB 00:09:17.312 element at address: 0x20000a600000 with size: 1.995972 MiB 00:09:17.312 element at address: 0x200003e00000 with size: 1.991028 MiB 00:09:17.312 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:17.312 element at address: 0x200019900040 with size: 0.999939 MiB 00:09:17.312 element at address: 0x200019a00000 with size: 0.999084 MiB 00:09:17.312 element at address: 0x200032600000 with size: 0.994324 MiB 00:09:17.312 element at address: 0x200000400000 with size: 0.992004 MiB 00:09:17.312 element at address: 0x200019200000 with size: 0.959656 MiB 00:09:17.312 element at address: 0x200019d00040 with size: 0.936401 MiB 00:09:17.312 element at address: 0x200000200000 with size: 0.716980 MiB 00:09:17.312 element at address: 0x20001b400000 with size: 0.559998 MiB 00:09:17.312 element at address: 0x200000c00000 with size: 0.489197 MiB 00:09:17.312 element at address: 0x200019600000 with size: 0.487976 MiB 00:09:17.312 element at address: 0x200019e00000 with size: 0.485413 MiB 00:09:17.312 element at address: 0x200012c00000 with size: 0.433472 MiB 00:09:17.312 element at address: 0x200028800000 with size: 0.390442 MiB 00:09:17.312 element at address: 0x200000800000 with size: 0.350891 MiB 00:09:17.312 list of standard malloc elements. 
size: 199.290405 MiB 00:09:17.312 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:09:17.312 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:09:17.312 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:17.312 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:17.312 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:09:17.312 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:17.312 element at address: 0x200019deff40 with size: 0.062683 MiB 00:09:17.312 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:17.312 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:09:17.312 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:09:17.312 element at address: 0x200012bff040 with size: 0.000305 MiB 00:09:17.312 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:09:17.312 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:09:17.313 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:09:17.313 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200000cff000 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff180 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff280 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff380 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff480 with size: 0.000244 MiB 00:09:17.313 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff680 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff780 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff880 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bff980 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:09:17.313 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200019affc40 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4906c0 with size: 0.000244 
MiB 00:09:17.313 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:09:17.313 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4922c0 
with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:09:17.314 element at 
address: 0x20001b493ec0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:09:17.314 element at address: 0x200028863f40 with size: 0.000244 MiB 00:09:17.314 element at address: 0x200028864040 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886af80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b080 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b180 with size: 0.000244 MiB 
00:09:17.314 element at address: 0x20002886b280 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b380 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b480 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b580 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b680 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b780 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b880 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886b980 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886be80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c080 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c180 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c280 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c380 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c480 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c580 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c680 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c780 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c880 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886c980 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886cd80 with 
size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d080 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d180 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d280 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d380 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d480 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d580 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d680 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d780 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d880 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886d980 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886da80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886db80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886de80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886df80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e080 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e180 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e280 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e380 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e480 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e580 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e680 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e780 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886e880 with size: 0.000244 MiB 00:09:17.314 element at address: 
0x20002886e980 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f080 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f180 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f280 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f380 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f480 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f580 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f680 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f780 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f880 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886f980 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:09:17.314 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:09:17.314 list of memzone associated elements. 
size: 607.930908 MiB 00:09:17.314 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:09:17.314 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:17.314 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:09:17.314 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:17.314 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:09:17.314 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58162_0 00:09:17.315 element at address: 0x200000dff340 with size: 48.003113 MiB 00:09:17.315 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58162_0 00:09:17.315 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:09:17.315 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58162_0 00:09:17.315 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:09:17.315 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:17.315 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:09:17.315 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:17.315 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:09:17.315 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58162_0 00:09:17.315 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:09:17.315 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58162 00:09:17.315 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:17.315 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58162 00:09:17.315 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:09:17.315 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:17.315 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:09:17.315 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:17.315 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:17.315 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:17.315 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:09:17.315 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:17.315 element at address: 0x200000cff100 with size: 1.000549 MiB 00:09:17.315 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58162 00:09:17.315 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:09:17.315 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58162 00:09:17.315 element at address: 0x200019affd40 with size: 1.000549 MiB 00:09:17.315 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58162 00:09:17.315 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:09:17.315 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58162 00:09:17.315 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:09:17.315 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58162 00:09:17.315 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:09:17.315 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58162 00:09:17.315 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:09:17.315 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:17.315 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:09:17.315 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:17.315 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:09:17.315 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:17.315 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:09:17.315 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58162 00:09:17.315 element at address: 0x20000085df80 with size: 0.125549 MiB 00:09:17.315 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58162 00:09:17.315 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:09:17.315 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:17.315 element at address: 0x200028864140 with size: 0.023804 MiB 00:09:17.315 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:17.315 element at address: 0x200000859d40 with size: 0.016174 MiB 00:09:17.315 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58162 00:09:17.315 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:09:17.315 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:17.315 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:09:17.315 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58162 00:09:17.315 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:09:17.315 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58162 00:09:17.315 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:09:17.315 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58162 00:09:17.315 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:09:17.315 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:17.315 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:17.315 14:08:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58162 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58162 ']' 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58162 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58162 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.315 14:08:48 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.315 killing process with pid 58162 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58162' 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58162 00:09:17.315 14:08:48 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58162 00:09:19.961 00:09:19.961 real 0m4.216s 00:09:19.961 user 0m4.138s 00:09:19.961 sys 0m0.592s 00:09:19.961 14:08:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.961 ************************************ 00:09:19.961 END TEST dpdk_mem_utility 00:09:19.961 14:08:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:19.961 ************************************ 00:09:19.961 14:08:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:19.961 14:08:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:19.961 14:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.961 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.961 ************************************ 00:09:19.961 START TEST event 00:09:19.961 ************************************ 00:09:19.961 14:08:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:20.220 * Looking for test storage... 
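The `killprocess` trace above (lines sourced from `autotest_common.sh`) follows a recognizable pattern: confirm a pid was recorded, probe it with `kill -0`, and on Linux refuse to signal the process if it is the `sudo` wrapper before killing and reaping it. A minimal sketch of that guard, with illustrative names rather than the exact SPDK helpers:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the killprocess guard seen in the trace above:
# verify the pid is set and alive, never kill the sudo wrapper itself,
# then terminate and reap the process.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid recorded
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1   # refuse to kill sudo
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore signal status
}
```

The `kill -0` probe sends no signal at all; it only checks that the pid exists and is signalable, which is why the trace runs it before the real `kill`.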
00:09:20.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:20.221 14:08:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.221 14:08:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.221 14:08:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.221 14:08:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.221 14:08:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.221 14:08:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.221 14:08:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.221 14:08:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.221 14:08:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.221 14:08:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.221 14:08:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.221 14:08:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.221 14:08:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.221 14:08:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.221 14:08:51 event -- scripts/common.sh@344 -- # case "$op" in 00:09:20.221 14:08:51 event -- scripts/common.sh@345 -- # : 1 00:09:20.221 14:08:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.221 14:08:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.221 14:08:51 event -- scripts/common.sh@365 -- # decimal 1 00:09:20.221 14:08:51 event -- scripts/common.sh@353 -- # local d=1 00:09:20.221 14:08:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.221 14:08:51 event -- scripts/common.sh@355 -- # echo 1 00:09:20.221 14:08:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.221 14:08:51 event -- scripts/common.sh@366 -- # decimal 2 00:09:20.221 14:08:51 event -- scripts/common.sh@353 -- # local d=2 00:09:20.221 14:08:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.221 14:08:51 event -- scripts/common.sh@355 -- # echo 2 00:09:20.221 14:08:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.221 14:08:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.221 14:08:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.221 14:08:51 event -- scripts/common.sh@368 -- # return 0 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.221 --rc genhtml_branch_coverage=1 00:09:20.221 --rc genhtml_function_coverage=1 00:09:20.221 --rc genhtml_legend=1 00:09:20.221 --rc geninfo_all_blocks=1 00:09:20.221 --rc geninfo_unexecuted_blocks=1 00:09:20.221 00:09:20.221 ' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.221 --rc genhtml_branch_coverage=1 00:09:20.221 --rc genhtml_function_coverage=1 00:09:20.221 --rc genhtml_legend=1 00:09:20.221 --rc geninfo_all_blocks=1 00:09:20.221 --rc geninfo_unexecuted_blocks=1 00:09:20.221 00:09:20.221 ' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.221 --rc lcov_branch_coverage=1 --rc 
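The `cmp_versions`/`lt 1.15 2` xtrace above walks a component-wise version comparison: both version strings are split on `.`, `-`, and `:` via `IFS=.-: read -ra`, then compared numerically position by position, padding the shorter array with zeros. A simplified, renamed sketch of that logic (the real `scripts/common.sh` helper also handles the other comparison operators):

```shell
#!/usr/bin/env bash
# Illustrative version of the "lower-than" comparison traced above.
# Returns 0 (true) when $1 is strictly lower than $2.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing parts with 0
        (( a > b )) && return 1                  # first version is greater
        (( a < b )) && return 0                  # first version is lower
    done
    return 1                                     # equal, so not lower-than
}
```

This is why `lt 1.15 2` in the trace succeeds and the lcov coverage options get enabled: the first component comparison (1 < 2) decides the result immediately.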
lcov_function_coverage=1 00:09:20.221 --rc genhtml_branch_coverage=1 00:09:20.221 --rc genhtml_function_coverage=1 00:09:20.221 --rc genhtml_legend=1 00:09:20.221 --rc geninfo_all_blocks=1 00:09:20.221 --rc geninfo_unexecuted_blocks=1 00:09:20.221 00:09:20.221 ' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.221 --rc genhtml_branch_coverage=1 00:09:20.221 --rc genhtml_function_coverage=1 00:09:20.221 --rc genhtml_legend=1 00:09:20.221 --rc geninfo_all_blocks=1 00:09:20.221 --rc geninfo_unexecuted_blocks=1 00:09:20.221 00:09:20.221 ' 00:09:20.221 14:08:51 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:20.221 14:08:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:20.221 14:08:51 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:20.221 14:08:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.221 14:08:51 event -- common/autotest_common.sh@10 -- # set +x 00:09:20.221 ************************************ 00:09:20.221 START TEST event_perf 00:09:20.221 ************************************ 00:09:20.221 14:08:51 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:20.221 Running I/O for 1 seconds...[2024-11-27 14:08:51.095317] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:20.221 [2024-11-27 14:08:51.095449] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58270 ] 00:09:20.479 [2024-11-27 14:08:51.274893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:20.479 Running I/O for 1 seconds...[2024-11-27 14:08:51.410178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.479 [2024-11-27 14:08:51.410332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.479 [2024-11-27 14:08:51.410433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.479 [2024-11-27 14:08:51.410441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.856 00:09:21.856 lcore 0: 196572 00:09:21.856 lcore 1: 196571 00:09:21.856 lcore 2: 196572 00:09:21.856 lcore 3: 196572 00:09:21.856 done. 
00:09:21.856 00:09:21.856 real 0m1.609s 00:09:21.856 user 0m4.372s 00:09:21.856 sys 0m0.112s 00:09:21.856 14:08:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.856 14:08:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 ************************************ 00:09:21.856 END TEST event_perf 00:09:21.856 ************************************ 00:09:21.856 14:08:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:21.856 14:08:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:21.856 14:08:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.856 14:08:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 ************************************ 00:09:21.856 START TEST event_reactor 00:09:21.856 ************************************ 00:09:21.856 14:08:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:21.856 [2024-11-27 14:08:52.761638] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:21.856 [2024-11-27 14:08:52.761761] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58310 ] 00:09:22.115 [2024-11-27 14:08:52.937459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.115 [2024-11-27 14:08:53.060746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.498 test_start 00:09:23.498 oneshot 00:09:23.498 tick 100 00:09:23.498 tick 100 00:09:23.498 tick 250 00:09:23.498 tick 100 00:09:23.498 tick 100 00:09:23.498 tick 100 00:09:23.498 tick 250 00:09:23.498 tick 500 00:09:23.498 tick 100 00:09:23.498 tick 100 00:09:23.498 tick 250 00:09:23.498 tick 100 00:09:23.498 tick 100 00:09:23.498 test_end 00:09:23.498 00:09:23.498 real 0m1.565s 00:09:23.498 user 0m1.374s 00:09:23.498 sys 0m0.083s 00:09:23.498 14:08:54 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.498 ************************************ 00:09:23.498 END TEST event_reactor 00:09:23.498 ************************************ 00:09:23.498 14:08:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 14:08:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:23.498 14:08:54 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:23.498 14:08:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.498 14:08:54 event -- common/autotest_common.sh@10 -- # set +x 00:09:23.498 ************************************ 00:09:23.498 START TEST event_reactor_perf 00:09:23.498 ************************************ 00:09:23.498 14:08:54 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:23.498 [2024-11-27 
14:08:54.380223] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:23.498 [2024-11-27 14:08:54.380334] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58352 ] 00:09:23.756 [2024-11-27 14:08:54.555893] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.756 [2024-11-27 14:08:54.677091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.133 test_start 00:09:25.133 test_end 00:09:25.133 Performance: 351636 events per second 00:09:25.133 00:09:25.133 real 0m1.577s 00:09:25.133 user 0m1.373s 00:09:25.133 sys 0m0.095s 00:09:25.133 14:08:55 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.133 14:08:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:25.133 ************************************ 00:09:25.133 END TEST event_reactor_perf 00:09:25.133 ************************************ 00:09:25.133 14:08:55 event -- event/event.sh@49 -- # uname -s 00:09:25.133 14:08:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:25.133 14:08:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:25.133 14:08:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.133 14:08:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.133 14:08:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:25.133 ************************************ 00:09:25.133 START TEST event_scheduler 00:09:25.133 ************************************ 00:09:25.133 14:08:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:25.133 * Looking for test storage... 
00:09:25.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:25.432 14:08:56 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.432 14:08:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.432 14:08:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.432 14:08:56 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.432 14:08:56 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.432 14:08:56 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.432 14:08:56 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.432 14:08:56 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.432 14:08:56 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.433 14:08:56 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.433 --rc genhtml_branch_coverage=1 00:09:25.433 --rc genhtml_function_coverage=1 00:09:25.433 --rc genhtml_legend=1 00:09:25.433 --rc geninfo_all_blocks=1 00:09:25.433 --rc geninfo_unexecuted_blocks=1 00:09:25.433 00:09:25.433 ' 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.433 --rc genhtml_branch_coverage=1 00:09:25.433 --rc genhtml_function_coverage=1 00:09:25.433 --rc 
genhtml_legend=1 00:09:25.433 --rc geninfo_all_blocks=1 00:09:25.433 --rc geninfo_unexecuted_blocks=1 00:09:25.433 00:09:25.433 ' 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.433 --rc genhtml_branch_coverage=1 00:09:25.433 --rc genhtml_function_coverage=1 00:09:25.433 --rc genhtml_legend=1 00:09:25.433 --rc geninfo_all_blocks=1 00:09:25.433 --rc geninfo_unexecuted_blocks=1 00:09:25.433 00:09:25.433 ' 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.433 --rc genhtml_branch_coverage=1 00:09:25.433 --rc genhtml_function_coverage=1 00:09:25.433 --rc genhtml_legend=1 00:09:25.433 --rc geninfo_all_blocks=1 00:09:25.433 --rc geninfo_unexecuted_blocks=1 00:09:25.433 00:09:25.433 ' 00:09:25.433 14:08:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:25.433 14:08:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58424 00:09:25.433 14:08:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:25.433 14:08:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:25.433 14:08:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58424 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58424 ']' 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:25.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.433 14:08:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:25.433 [2024-11-27 14:08:56.275380] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:09:25.433 [2024-11-27 14:08:56.275586] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58424 ] 00:09:25.697 [2024-11-27 14:08:56.451986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.697 [2024-11-27 14:08:56.578205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.697 [2024-11-27 14:08:56.578464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.697 [2024-11-27 14:08:56.578430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.697 [2024-11-27 14:08:56.578304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:26.265 14:08:57 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:26.265 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:26.265 POWER: Cannot set governor of lcore 0 to userspace 00:09:26.265 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:26.265 POWER: Cannot set governor of lcore 0 to performance 00:09:26.265 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:26.265 POWER: Cannot set governor of lcore 0 to userspace 00:09:26.265 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:26.265 POWER: Cannot set governor of lcore 0 to userspace 00:09:26.265 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:26.265 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:26.265 POWER: Unable to set Power Management Environment for lcore 0 00:09:26.265 [2024-11-27 14:08:57.171585] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:09:26.265 [2024-11-27 14:08:57.171648] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:09:26.265 [2024-11-27 14:08:57.171664] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:26.265 [2024-11-27 14:08:57.171688] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:26.265 [2024-11-27 14:08:57.171698] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:26.265 [2024-11-27 14:08:57.171709] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.265 14:08:57 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.265 14:08:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 [2024-11-27 14:08:57.516785] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 ************************************ 00:09:26.834 START TEST scheduler_create_thread 00:09:26.834 ************************************ 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 2 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 3 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 4 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 5 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 6 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.834 7 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 8 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 9 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 10 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.834 14:08:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 ************************************ 00:09:26.834 END TEST scheduler_create_thread 00:09:26.834 ************************************ 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.834 00:09:26.834 real 0m0.109s 00:09:26.834 user 0m0.013s 00:09:26.834 sys 0m0.003s 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.834 14:08:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 14:08:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:26.834 14:08:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58424 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58424 ']' 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58424 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58424 00:09:26.834 killing process with pid 58424 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58424' 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58424 00:09:26.834 14:08:57 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58424 00:09:27.402 [2024-11-27 14:08:58.122922] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:28.777 00:09:28.777 real 0m3.387s 00:09:28.777 user 0m5.161s 00:09:28.777 sys 0m0.516s 00:09:28.777 14:08:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.777 14:08:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:28.777 ************************************ 00:09:28.777 END TEST event_scheduler 00:09:28.777 ************************************ 00:09:28.777 14:08:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:28.777 14:08:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:28.777 14:08:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.777 14:08:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.777 14:08:59 event -- common/autotest_common.sh@10 -- # set +x 00:09:28.777 ************************************ 00:09:28.777 START TEST app_repeat 00:09:28.777 ************************************ 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:28.777 Process app_repeat pid: 58512 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58512 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r 
/var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58512' 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:28.777 spdk_app_start Round 0 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:28.777 14:08:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58512 /var/tmp/spdk-nbd.sock 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58512 ']' 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:28.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.777 14:08:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:28.777 [2024-11-27 14:08:59.498962] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:28.777 [2024-11-27 14:08:59.499088] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58512 ] 00:09:28.777 [2024-11-27 14:08:59.675699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:29.035 [2024-11-27 14:08:59.798655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.035 [2024-11-27 14:08:59.798685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.603 14:09:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.603 14:09:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:29.603 14:09:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:29.860 Malloc0 00:09:29.860 14:09:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:30.426 Malloc1 00:09:30.426 14:09:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:30.426 14:09:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:30.426 /dev/nbd0 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:30.426 1+0 records in 00:09:30.426 1+0 
records out 00:09:30.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487767 s, 8.4 MB/s 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.426 14:09:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.426 14:09:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:30.684 /dev/nbd1 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:30.684 1+0 records in 00:09:30.684 1+0 records out 00:09:30.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223873 s, 18.3 MB/s 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:30.684 14:09:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.684 14:09:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:30.943 { 00:09:30.943 "nbd_device": "/dev/nbd0", 00:09:30.943 "bdev_name": "Malloc0" 00:09:30.943 }, 00:09:30.943 { 00:09:30.943 "nbd_device": "/dev/nbd1", 00:09:30.943 "bdev_name": "Malloc1" 00:09:30.943 } 00:09:30.943 ]' 00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:30.943 { 00:09:30.943 "nbd_device": "/dev/nbd0", 00:09:30.943 "bdev_name": "Malloc0" 00:09:30.943 }, 00:09:30.943 { 00:09:30.943 "nbd_device": "/dev/nbd1", 00:09:30.943 "bdev_name": "Malloc1" 00:09:30.943 } 00:09:30.943 ]' 
00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:30.943 /dev/nbd1' 00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:30.943 /dev/nbd1' 00:09:30.943 14:09:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:31.203 256+0 records in 00:09:31.203 256+0 records out 00:09:31.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136261 s, 77.0 MB/s 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.203 256+0 records in 00:09:31.203 256+0 records out 00:09:31.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215206 s, 48.7 MB/s 00:09:31.203 14:09:01 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:31.203 256+0 records in 00:09:31.203 256+0 records out 00:09:31.203 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243903 s, 43.0 MB/s 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.203 14:09:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:31.462 14:09:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.720 14:09:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:31.977 14:09:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:31.977 14:09:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:32.569 14:09:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:33.948 [2024-11-27 14:09:04.508451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:33.948 [2024-11-27 14:09:04.632653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.948 [2024-11-27 14:09:04.632654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.948 
[2024-11-27 14:09:04.841446] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:33.948 [2024-11-27 14:09:04.841553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:35.322 spdk_app_start Round 1 00:09:35.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:35.323 14:09:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:35.323 14:09:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:35.323 14:09:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58512 /var/tmp/spdk-nbd.sock 00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58512 ']' 00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.323 14:09:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:35.582 14:09:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:35.582 14:09:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:35.582 14:09:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:35.840 Malloc0 00:09:35.840 14:09:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:36.419 Malloc1 00:09:36.419 14:09:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:36.419 14:09:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:36.420 14:09:07 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:36.420 /dev/nbd0 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:36.420 14:09:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:36.420 1+0 records in 00:09:36.420 1+0 records out 00:09:36.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423159 s, 9.7 MB/s 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:36.420 14:09:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.678 14:09:07 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:36.678 14:09:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:36.678 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.678 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:36.678 14:09:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:36.678 /dev/nbd1 00:09:36.936 14:09:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:36.936 14:09:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:36.936 14:09:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:36.936 14:09:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:36.936 14:09:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:36.937 1+0 records in 00:09:36.937 1+0 records out 00:09:36.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387334 s, 10.6 MB/s 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:36.937 14:09:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:36.937 14:09:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:36.937 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:36.937 14:09:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:36.937 14:09:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:36.937 14:09:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:36.937 14:09:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.196 14:09:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:37.196 { 00:09:37.196 "nbd_device": "/dev/nbd0", 00:09:37.196 "bdev_name": "Malloc0" 00:09:37.196 }, 00:09:37.196 { 00:09:37.196 "nbd_device": "/dev/nbd1", 00:09:37.196 "bdev_name": "Malloc1" 00:09:37.196 } 00:09:37.196 ]' 00:09:37.196 14:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:37.196 { 00:09:37.196 "nbd_device": "/dev/nbd0", 00:09:37.196 "bdev_name": "Malloc0" 00:09:37.196 }, 00:09:37.196 { 00:09:37.196 "nbd_device": "/dev/nbd1", 00:09:37.196 "bdev_name": "Malloc1" 00:09:37.196 } 00:09:37.196 ]' 00:09:37.196 14:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.196 14:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:37.197 /dev/nbd1' 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:37.197 /dev/nbd1' 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:37.197 
14:09:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:37.197 256+0 records in 00:09:37.197 256+0 records out 00:09:37.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00524525 s, 200 MB/s 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:37.197 256+0 records in 00:09:37.197 256+0 records out 00:09:37.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241438 s, 43.4 MB/s 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:37.197 14:09:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:37.197 256+0 records in 00:09:37.197 256+0 records out 00:09:37.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278472 s, 37.7 MB/s 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.197 14:09:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:37.455 14:09:08 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:37.455 14:09:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:37.455 14:09:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:37.456 14:09:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.715 14:09:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.972 14:09:08 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:37.972 14:09:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:37.972 14:09:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:38.538 14:09:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:39.970 [2024-11-27 14:09:10.644180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.970 [2024-11-27 14:09:10.770917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.970 [2024-11-27 14:09:10.770941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.229 [2024-11-27 14:09:10.989584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:40.229 [2024-11-27 14:09:10.989697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:41.604 spdk_app_start Round 2 00:09:41.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:41.604 14:09:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:41.604 14:09:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:41.604 14:09:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58512 /var/tmp/spdk-nbd.sock 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58512 ']' 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.604 14:09:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:41.862 14:09:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.862 14:09:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:41.862 14:09:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.120 Malloc0 00:09:42.120 14:09:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:42.378 Malloc1 00:09:42.378 14:09:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.378 14:09:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:42.636 /dev/nbd0 00:09:42.636 14:09:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:42.636 14:09:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.636 1+0 records in 00:09:42.636 1+0 records out 00:09:42.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271051 s, 15.1 MB/s 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.636 14:09:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:42.636 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.636 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.636 14:09:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:42.942 /dev/nbd1 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:42.942 14:09:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:42.942 1+0 records in 00:09:42.942 1+0 records out 00:09:42.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381109 s, 10.7 MB/s 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:42.942 14:09:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.942 14:09:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:43.200 { 00:09:43.200 "nbd_device": "/dev/nbd0", 00:09:43.200 "bdev_name": "Malloc0" 00:09:43.200 }, 00:09:43.200 { 00:09:43.200 "nbd_device": "/dev/nbd1", 00:09:43.200 "bdev_name": "Malloc1" 00:09:43.200 } 00:09:43.200 ]' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:43.200 { 
00:09:43.200 "nbd_device": "/dev/nbd0", 00:09:43.200 "bdev_name": "Malloc0" 00:09:43.200 }, 00:09:43.200 { 00:09:43.200 "nbd_device": "/dev/nbd1", 00:09:43.200 "bdev_name": "Malloc1" 00:09:43.200 } 00:09:43.200 ]' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:43.200 /dev/nbd1' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:43.200 /dev/nbd1' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:43.200 256+0 records in 00:09:43.200 256+0 records out 00:09:43.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135389 s, 77.4 MB/s 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.200 14:09:14 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:43.200 256+0 records in 00:09:43.200 256+0 records out 00:09:43.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242799 s, 43.2 MB/s 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:43.200 256+0 records in 00:09:43.200 256+0 records out 00:09:43.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291206 s, 36.0 MB/s 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.200 14:09:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.458 14:09:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:43.717 14:09:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.717 14:09:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:43.975 14:09:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:43.975 14:09:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:44.542 14:09:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:45.920 
[2024-11-27 14:09:16.740186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.920 [2024-11-27 14:09:16.866811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.920 [2024-11-27 14:09:16.866812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.183 [2024-11-27 14:09:17.086579] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:46.183 [2024-11-27 14:09:17.086707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:47.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:47.562 14:09:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58512 /var/tmp/spdk-nbd.sock 00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58512 ']' 00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.562 14:09:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:47.820 14:09:18 event.app_repeat -- event/event.sh@39 -- # killprocess 58512 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58512 ']' 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58512 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58512 00:09:47.820 killing process with pid 58512 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58512' 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58512 00:09:47.820 14:09:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58512 00:09:49.197 spdk_app_start is called in Round 0. 00:09:49.197 Shutdown signal received, stop current app iteration 00:09:49.197 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:09:49.197 spdk_app_start is called in Round 1. 00:09:49.197 Shutdown signal received, stop current app iteration 00:09:49.197 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:09:49.197 spdk_app_start is called in Round 2. 
00:09:49.197 Shutdown signal received, stop current app iteration 00:09:49.197 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 reinitialization... 00:09:49.197 spdk_app_start is called in Round 3. 00:09:49.197 Shutdown signal received, stop current app iteration 00:09:49.197 14:09:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:49.197 14:09:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:49.197 00:09:49.197 real 0m20.501s 00:09:49.197 user 0m44.324s 00:09:49.197 sys 0m2.942s 00:09:49.197 14:09:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:49.197 ************************************ 00:09:49.197 END TEST app_repeat 00:09:49.197 ************************************ 00:09:49.197 14:09:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:49.197 14:09:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:49.197 14:09:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:49.197 14:09:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.197 14:09:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.197 14:09:19 event -- common/autotest_common.sh@10 -- # set +x 00:09:49.197 ************************************ 00:09:49.197 START TEST cpu_locks 00:09:49.197 ************************************ 00:09:49.197 14:09:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:49.197 * Looking for test storage... 
00:09:49.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:49.197 14:09:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:49.197 14:09:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:49.197 14:09:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.457 14:09:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.457 --rc genhtml_branch_coverage=1 00:09:49.457 --rc genhtml_function_coverage=1 00:09:49.457 --rc genhtml_legend=1 00:09:49.457 --rc geninfo_all_blocks=1 00:09:49.457 --rc geninfo_unexecuted_blocks=1 00:09:49.457 00:09:49.457 ' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.457 --rc genhtml_branch_coverage=1 00:09:49.457 --rc genhtml_function_coverage=1 00:09:49.457 --rc genhtml_legend=1 00:09:49.457 --rc geninfo_all_blocks=1 00:09:49.457 --rc geninfo_unexecuted_blocks=1 
00:09:49.457 00:09:49.457 ' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.457 --rc genhtml_branch_coverage=1 00:09:49.457 --rc genhtml_function_coverage=1 00:09:49.457 --rc genhtml_legend=1 00:09:49.457 --rc geninfo_all_blocks=1 00:09:49.457 --rc geninfo_unexecuted_blocks=1 00:09:49.457 00:09:49.457 ' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:49.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.457 --rc genhtml_branch_coverage=1 00:09:49.457 --rc genhtml_function_coverage=1 00:09:49.457 --rc genhtml_legend=1 00:09:49.457 --rc geninfo_all_blocks=1 00:09:49.457 --rc geninfo_unexecuted_blocks=1 00:09:49.457 00:09:49.457 ' 00:09:49.457 14:09:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:49.457 14:09:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:49.457 14:09:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:49.457 14:09:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:49.457 14:09:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.457 ************************************ 00:09:49.457 START TEST default_locks 00:09:49.457 ************************************ 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58967 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58967 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.457 14:09:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:49.457 [2024-11-27 14:09:20.350711] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:49.457 [2024-11-27 14:09:20.350855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:09:49.716 [2024-11-27 14:09:20.527053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.716 [2024-11-27 14:09:20.648798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.098 killing process with pid 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58967 00:09:51.098 14:09:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58967 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58967 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58967 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58967 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58967) - No such process 00:09:53.652 ERROR: process (pid: 58967) is no longer running 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:53.652 00:09:53.652 real 0m4.261s 00:09:53.652 user 0m4.212s 00:09:53.652 sys 0m0.614s 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.652 14:09:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.652 ************************************ 00:09:53.652 END TEST default_locks 00:09:53.652 ************************************ 00:09:53.652 14:09:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:53.652 14:09:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:09:53.652 14:09:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.652 14:09:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:53.652 ************************************ 00:09:53.652 START TEST default_locks_via_rpc 00:09:53.652 ************************************ 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59046 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59046 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59046 ']' 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.652 14:09:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.911 [2024-11-27 14:09:24.670856] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:09:53.911 [2024-11-27 14:09:24.670981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59046 ]
00:09:53.911 [2024-11-27 14:09:24.834262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:54.171 [2024-11-27 14:09:24.956192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59046
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59046
00:09:55.122 14:09:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59046
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59046 ']'
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59046
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59046
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59046
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59046'
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59046
00:09:55.689 14:09:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59046
00:09:58.221
00:09:58.221 real 0m4.334s
00:09:58.221 user 0m4.356s
00:09:58.221 sys 0m0.685s
00:09:58.221 14:09:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.221 14:09:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:58.221 ************************************
00:09:58.222 END TEST default_locks_via_rpc
00:09:58.222 ************************************
00:09:58.222 14:09:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:58.222 14:09:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:58.222 14:09:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.222 14:09:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:58.222 ************************************
00:09:58.222 START TEST non_locking_app_on_locked_coremask
00:09:58.222 ************************************
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59121
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59121 /var/tmp/spdk.sock
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59121 ']'
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:58.222 14:09:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:58.481 [2024-11-27 14:09:29.072532] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:09:58.481 [2024-11-27 14:09:29.072659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59121 ]
00:09:58.481 [2024-11-27 14:09:29.246776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.481 [2024-11-27 14:09:29.372946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59143
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59143 /var/tmp/spdk2.sock
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59143 ']'
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.420 14:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:59.680 [2024-11-27 14:09:30.440850] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:09:59.680 [2024-11-27 14:09:30.441001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59143 ]
00:09:59.680 [2024-11-27 14:09:30.614612] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:59.680 [2024-11-27 14:09:30.614689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.940 [2024-11-27 14:09:30.856030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59121 ']'
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59121'
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59121
00:10:02.476 14:09:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59121
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59143
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59143 ']'
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59143
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59143
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59143
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59143'
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59143
00:10:07.752 14:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59143
00:10:10.288
00:10:10.288 real 0m12.078s
00:10:10.288 user 0m12.310s
00:10:10.288 sys 0m1.240s
00:10:10.288 14:09:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:10.288 14:09:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:10.288 ************************************
00:10:10.288 END TEST non_locking_app_on_locked_coremask
00:10:10.288 ************************************
00:10:10.288 14:09:41 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:10:10.288 14:09:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:10.288 14:09:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:10.288 14:09:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:10.288 ************************************
00:10:10.288 START TEST locking_app_on_unlocked_coremask
00:10:10.288 ************************************
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59294
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59294 /var/tmp/spdk.sock
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59294 ']'
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:10.288 14:09:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:10.547 [2024-11-27 14:09:41.220399] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:10.547 [2024-11-27 14:09:41.220528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ]
00:10:10.547 [2024-11-27 14:09:41.380736] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:10.547 [2024-11-27 14:09:41.380795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:10.807 [2024-11-27 14:09:41.509352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59311
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59311 /var/tmp/spdk2.sock
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59311 ']'
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:11.745 14:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:11.745 [2024-11-27 14:09:42.510948] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:11.745 [2024-11-27 14:09:42.511069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ]
00:10:12.005 [2024-11-27 14:09:42.689571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.005 [2024-11-27 14:09:42.928771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59311
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59311
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59294
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59294 ']'
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59294
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59294
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59294
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59294'
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59294
00:10:14.592 14:09:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59294
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59311
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59311 ']'
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59311
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59311
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' killing process with pid 59311
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59311'
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59311
00:10:19.888 14:09:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59311
00:10:22.438
00:10:22.438 real 0m11.926s
00:10:22.438 user 0m12.252s
00:10:22.438 sys 0m1.166s
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:22.438 ************************************
00:10:22.438 END TEST locking_app_on_unlocked_coremask
00:10:22.438 ************************************
00:10:22.438 14:09:53 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:10:22.438 14:09:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:22.438 14:09:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.438 14:09:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:22.438 ************************************
00:10:22.438 START TEST locking_app_on_locked_coremask
00:10:22.438 ************************************
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59465
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59465 /var/tmp/spdk.sock
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59465 ']'
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:22.438 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:22.439 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:22.439 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:22.439 14:09:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:22.439 [2024-11-27 14:09:53.205861] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:22.439 [2024-11-27 14:09:53.205974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59465 ]
00:10:22.439 [2024-11-27 14:09:53.382871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:22.697 [2024-11-27 14:09:53.503058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59486
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59486 /var/tmp/spdk2.sock
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59486 /var/tmp/spdk2.sock
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59486 /var/tmp/spdk2.sock
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59486 ']'
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:23.638 14:09:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:23.898 [2024-11-27 14:09:54.496026] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:23.898 [2024-11-27 14:09:54.496156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ]
00:10:23.898 [2024-11-27 14:09:54.675460] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59465 has claimed it.
00:10:23.898 [2024-11-27 14:09:54.675536] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:24.467 ERROR: process (pid: 59486) is no longer running
00:10:24.467 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59486) - No such process
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59465
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59465
00:10:24.467 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59465
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59465 ']'
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59465
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59465
00:10:24.726 killing process with pid 59465
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59465'
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59465
00:10:24.726 14:09:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59465
00:10:27.391 ************************************
00:10:27.391 END TEST locking_app_on_locked_coremask
00:10:27.391 ************************************
00:10:27.391
00:10:27.391 real 0m4.951s
00:10:27.391 user 0m5.138s
00:10:27.391 sys 0m0.745s
00:10:27.391 14:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:27.391 14:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:27.391 14:09:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:10:27.391 14:09:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:27.391 14:09:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:27.391 14:09:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:27.391 ************************************
00:10:27.391 START TEST locking_overlapped_coremask
00:10:27.391 ************************************
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59558
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59558 /var/tmp/spdk.sock
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59558 ']'
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:27.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:27.391 14:09:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:27.391 [2024-11-27 14:09:58.222827] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:10:27.391 [2024-11-27 14:09:58.222962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59558 ] 00:10:27.651 [2024-11-27 14:09:58.399570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.651 [2024-11-27 14:09:58.528637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.651 [2024-11-27 14:09:58.528649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.651 [2024-11-27 14:09:58.528662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59576 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59576 /var/tmp/spdk2.sock 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59576 /var/tmp/spdk2.sock 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59576 /var/tmp/spdk2.sock 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59576 ']' 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:28.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.588 14:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:28.847 [2024-11-27 14:09:59.563775] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:28.847 [2024-11-27 14:09:59.563917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59576 ] 00:10:28.847 [2024-11-27 14:09:59.743690] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59558 has claimed it. 00:10:28.847 [2024-11-27 14:09:59.743953] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:10:29.418 ERROR: process (pid: 59576) is no longer running 00:10:29.418 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59576) - No such process 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59558 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59558 ']' 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59558 00:10:29.418 14:10:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59558 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.418 killing process with pid 59558 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59558' 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59558 00:10:29.418 14:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59558 00:10:31.957 00:10:31.957 real 0m4.762s 00:10:31.957 user 0m13.009s 00:10:31.957 sys 0m0.585s 00:10:31.957 14:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.957 14:10:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:31.957 ************************************ 00:10:31.957 END TEST locking_overlapped_coremask 00:10:31.957 ************************************ 00:10:32.216 14:10:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:32.216 14:10:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.216 14:10:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.216 14:10:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:32.216 ************************************ 00:10:32.216 START TEST 
locking_overlapped_coremask_via_rpc 00:10:32.216 ************************************ 00:10:32.216 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59645 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59645 /var/tmp/spdk.sock 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59645 ']' 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.217 14:10:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.217 [2024-11-27 14:10:03.049731] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:32.217 [2024-11-27 14:10:03.049858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59645 ] 00:10:32.475 [2024-11-27 14:10:03.226767] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:32.475 [2024-11-27 14:10:03.226826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.475 [2024-11-27 14:10:03.355735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.475 [2024-11-27 14:10:03.355858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.475 [2024-11-27 14:10:03.355923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59669 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59669 /var/tmp/spdk2.sock 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59669 ']' 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:33.410 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.411 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:33.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:33.411 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.411 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.411 14:10:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:33.670 [2024-11-27 14:10:04.428914] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:33.670 [2024-11-27 14:10:04.429062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59669 ] 00:10:33.670 [2024-11-27 14:10:04.617277] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:33.670 [2024-11-27 14:10:04.617511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.237 [2024-11-27 14:10:04.892686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.237 [2024-11-27 14:10:04.892700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.237 [2024-11-27 14:10:04.892709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:36.140 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.140 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:36.140 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.141 14:10:07 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.141 [2024-11-27 14:10:07.049340] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59645 has claimed it. 00:10:36.141 request: 00:10:36.141 { 00:10:36.141 "method": "framework_enable_cpumask_locks", 00:10:36.141 "req_id": 1 00:10:36.141 } 00:10:36.141 Got JSON-RPC error response 00:10:36.141 response: 00:10:36.141 { 00:10:36.141 "code": -32603, 00:10:36.141 "message": "Failed to claim CPU core: 2" 00:10:36.141 } 00:10:36.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59645 /var/tmp/spdk.sock 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59645 ']' 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.141 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59669 /var/tmp/spdk2.sock 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59669 ']' 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.400 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 ************************************ 00:10:36.659 END TEST locking_overlapped_coremask_via_rpc 00:10:36.659 ************************************ 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:36.659 00:10:36.659 real 0m4.584s 00:10:36.659 user 0m1.408s 00:10:36.659 sys 0m0.202s 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.659 14:10:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.659 14:10:07 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:36.659 14:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59645 ]] 00:10:36.659 14:10:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59645 00:10:36.659 14:10:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59645 ']' 00:10:36.659 14:10:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59645 00:10:36.659 14:10:07 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:36.659 14:10:07 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.659 14:10:07 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59645 00:10:36.920 killing process with pid 59645 00:10:36.920 14:10:07 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.920 14:10:07 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.920 14:10:07 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59645' 00:10:36.920 14:10:07 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59645 00:10:36.920 14:10:07 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59645 00:10:39.468 14:10:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59669 ]] 00:10:39.468 14:10:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59669 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59669 ']' 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59669 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59669 00:10:39.468 killing process with pid 59669 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59669' 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59669 00:10:39.468 14:10:10 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59669 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:42.035 Process with pid 59645 is not found 00:10:42.035 Process with pid 59669 is not found 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59645 ]] 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59645 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59645 ']' 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59645 00:10:42.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59645) - No such process 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59645 is not found' 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59669 ]] 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59669 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59669 ']' 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59669 00:10:42.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59669) - No such process 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59669 is not found' 00:10:42.035 14:10:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:42.035 00:10:42.035 real 0m52.896s 00:10:42.035 user 1m31.002s 00:10:42.035 sys 0m6.540s 00:10:42.035 14:10:12 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.035 ************************************ 00:10:42.035 END TEST cpu_locks 00:10:42.035 14:10:12 event.cpu_locks 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.035 ************************************ 00:10:42.035 00:10:42.035 real 1m22.136s 00:10:42.035 user 2m27.846s 00:10:42.035 sys 0m10.666s 00:10:42.035 14:10:12 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.035 14:10:12 event -- common/autotest_common.sh@10 -- # set +x 00:10:42.035 ************************************ 00:10:42.035 END TEST event 00:10:42.035 ************************************ 00:10:42.294 14:10:12 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:42.294 14:10:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:42.294 14:10:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.294 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 ************************************ 00:10:42.294 START TEST thread 00:10:42.294 ************************************ 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:42.294 * Looking for test storage... 
00:10:42.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.294 14:10:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.294 14:10:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.294 14:10:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.294 14:10:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.294 14:10:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.294 14:10:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.294 14:10:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.294 14:10:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.294 14:10:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.294 14:10:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.294 14:10:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.294 14:10:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:42.294 14:10:13 thread -- scripts/common.sh@345 -- # : 1 00:10:42.294 14:10:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.294 14:10:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.294 14:10:13 thread -- scripts/common.sh@365 -- # decimal 1 00:10:42.294 14:10:13 thread -- scripts/common.sh@353 -- # local d=1 00:10:42.294 14:10:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.294 14:10:13 thread -- scripts/common.sh@355 -- # echo 1 00:10:42.294 14:10:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.294 14:10:13 thread -- scripts/common.sh@366 -- # decimal 2 00:10:42.294 14:10:13 thread -- scripts/common.sh@353 -- # local d=2 00:10:42.294 14:10:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.294 14:10:13 thread -- scripts/common.sh@355 -- # echo 2 00:10:42.294 14:10:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.294 14:10:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.294 14:10:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.294 14:10:13 thread -- scripts/common.sh@368 -- # return 0 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.294 --rc genhtml_branch_coverage=1 00:10:42.294 --rc genhtml_function_coverage=1 00:10:42.294 --rc genhtml_legend=1 00:10:42.294 --rc geninfo_all_blocks=1 00:10:42.294 --rc geninfo_unexecuted_blocks=1 00:10:42.294 00:10:42.294 ' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.294 --rc genhtml_branch_coverage=1 00:10:42.294 --rc genhtml_function_coverage=1 00:10:42.294 --rc genhtml_legend=1 00:10:42.294 --rc geninfo_all_blocks=1 00:10:42.294 --rc geninfo_unexecuted_blocks=1 00:10:42.294 00:10:42.294 ' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.294 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.294 --rc genhtml_branch_coverage=1 00:10:42.294 --rc genhtml_function_coverage=1 00:10:42.294 --rc genhtml_legend=1 00:10:42.294 --rc geninfo_all_blocks=1 00:10:42.294 --rc geninfo_unexecuted_blocks=1 00:10:42.294 00:10:42.294 ' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.294 --rc genhtml_branch_coverage=1 00:10:42.294 --rc genhtml_function_coverage=1 00:10:42.294 --rc genhtml_legend=1 00:10:42.294 --rc geninfo_all_blocks=1 00:10:42.294 --rc geninfo_unexecuted_blocks=1 00:10:42.294 00:10:42.294 ' 00:10:42.294 14:10:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.294 14:10:13 thread -- common/autotest_common.sh@10 -- # set +x 00:10:42.294 ************************************ 00:10:42.294 START TEST thread_poller_perf 00:10:42.294 ************************************ 00:10:42.294 14:10:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:42.552 [2024-11-27 14:10:13.279479] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:42.552 [2024-11-27 14:10:13.279653] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:10:42.552 [2024-11-27 14:10:13.454948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.810 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:42.810 [2024-11-27 14:10:13.579283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.184 [2024-11-27T14:10:15.140Z] ====================================== 00:10:44.184 [2024-11-27T14:10:15.140Z] busy:2301939698 (cyc) 00:10:44.184 [2024-11-27T14:10:15.140Z] total_run_count: 344000 00:10:44.184 [2024-11-27T14:10:15.140Z] tsc_hz: 2290000000 (cyc) 00:10:44.184 [2024-11-27T14:10:15.140Z] ====================================== 00:10:44.184 [2024-11-27T14:10:15.140Z] poller_cost: 6691 (cyc), 2921 (nsec) 00:10:44.184 00:10:44.184 real 0m1.633s 00:10:44.184 user 0m1.431s 00:10:44.184 sys 0m0.092s 00:10:44.184 14:10:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.184 14:10:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:44.184 ************************************ 00:10:44.184 END TEST thread_poller_perf 00:10:44.184 ************************************ 00:10:44.184 14:10:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:44.184 14:10:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:44.184 14:10:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.184 14:10:14 thread -- common/autotest_common.sh@10 -- # set +x 00:10:44.184 ************************************ 00:10:44.184 START TEST thread_poller_perf 00:10:44.184 
************************************ 00:10:44.184 14:10:14 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:44.184 [2024-11-27 14:10:14.953751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:44.184 [2024-11-27 14:10:14.953914] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:10:44.442 [2024-11-27 14:10:15.147301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.442 [2024-11-27 14:10:15.283899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.442 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:45.816 [2024-11-27T14:10:16.772Z] ====================================== 00:10:45.816 [2024-11-27T14:10:16.772Z] busy:2294682250 (cyc) 00:10:45.816 [2024-11-27T14:10:16.772Z] total_run_count: 4079000 00:10:45.816 [2024-11-27T14:10:16.772Z] tsc_hz: 2290000000 (cyc) 00:10:45.816 [2024-11-27T14:10:16.772Z] ====================================== 00:10:45.816 [2024-11-27T14:10:16.772Z] poller_cost: 562 (cyc), 245 (nsec) 00:10:45.816 00:10:45.816 real 0m1.650s 00:10:45.816 user 0m1.430s 00:10:45.816 sys 0m0.109s 00:10:45.816 14:10:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.816 14:10:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:45.816 ************************************ 00:10:45.816 END TEST thread_poller_perf 00:10:45.816 ************************************ 00:10:45.816 14:10:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:45.816 00:10:45.816 real 0m3.602s 00:10:45.816 user 0m3.024s 00:10:45.816 sys 0m0.372s 00:10:45.816 14:10:16 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.816 14:10:16 thread -- common/autotest_common.sh@10 -- # set +x 00:10:45.816 ************************************ 00:10:45.816 END TEST thread 00:10:45.816 ************************************ 00:10:45.816 14:10:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:45.817 14:10:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:45.817 14:10:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.817 14:10:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.817 14:10:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.817 ************************************ 00:10:45.817 START TEST app_cmdline 00:10:45.817 ************************************ 00:10:45.817 14:10:16 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:45.817 * Looking for test storage... 00:10:45.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:45.817 14:10:16 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:45.817 14:10:16 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:45.817 14:10:16 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.075 14:10:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.075 --rc genhtml_branch_coverage=1 00:10:46.075 --rc genhtml_function_coverage=1 00:10:46.075 --rc 
genhtml_legend=1 00:10:46.075 --rc geninfo_all_blocks=1 00:10:46.075 --rc geninfo_unexecuted_blocks=1 00:10:46.075 00:10:46.075 ' 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.075 --rc genhtml_branch_coverage=1 00:10:46.075 --rc genhtml_function_coverage=1 00:10:46.075 --rc genhtml_legend=1 00:10:46.075 --rc geninfo_all_blocks=1 00:10:46.075 --rc geninfo_unexecuted_blocks=1 00:10:46.075 00:10:46.075 ' 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.075 --rc genhtml_branch_coverage=1 00:10:46.075 --rc genhtml_function_coverage=1 00:10:46.075 --rc genhtml_legend=1 00:10:46.075 --rc geninfo_all_blocks=1 00:10:46.075 --rc geninfo_unexecuted_blocks=1 00:10:46.075 00:10:46.075 ' 00:10:46.075 14:10:16 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.075 --rc genhtml_branch_coverage=1 00:10:46.075 --rc genhtml_function_coverage=1 00:10:46.075 --rc genhtml_legend=1 00:10:46.075 --rc geninfo_all_blocks=1 00:10:46.075 --rc geninfo_unexecuted_blocks=1 00:10:46.075 00:10:46.075 ' 00:10:46.075 14:10:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:46.075 14:10:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59995 00:10:46.076 14:10:16 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:46.076 14:10:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59995 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59995 ']' 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:10:46.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.076 14:10:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:46.076 [2024-11-27 14:10:16.930279] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:10:46.076 [2024-11-27 14:10:16.930415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59995 ] 00:10:46.334 [2024-11-27 14:10:17.108382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.334 [2024-11-27 14:10:17.248658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:47.710 { 00:10:47.710 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:10:47.710 "fields": { 00:10:47.710 "major": 25, 00:10:47.710 "minor": 1, 00:10:47.710 "patch": 0, 00:10:47.710 "suffix": "-pre", 00:10:47.710 "commit": "35cd3e84d" 00:10:47.710 } 00:10:47.710 } 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:47.710 14:10:18 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:47.710 14:10:18 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:47.968 request: 00:10:47.968 { 00:10:47.968 "method": "env_dpdk_get_mem_stats", 00:10:47.968 "req_id": 1 00:10:47.968 } 00:10:47.968 Got JSON-RPC error response 00:10:47.968 response: 00:10:47.969 { 00:10:47.969 "code": -32601, 00:10:47.969 "message": "Method not found" 00:10:47.969 } 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:47.969 14:10:18 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59995 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59995 ']' 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59995 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59995 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.969 killing process with pid 59995 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59995' 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@973 -- # kill 59995 00:10:47.969 14:10:18 app_cmdline -- common/autotest_common.sh@978 -- # wait 59995 00:10:51.248 00:10:51.248 real 0m5.186s 00:10:51.248 user 0m5.648s 00:10:51.248 sys 0m0.584s 00:10:51.248 14:10:21 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.248 14:10:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:51.248 ************************************ 00:10:51.248 END TEST app_cmdline 00:10:51.248 ************************************ 00:10:51.248 14:10:21 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:51.248 14:10:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.248 14:10:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.248 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:10:51.248 ************************************ 00:10:51.248 START TEST version 00:10:51.248 ************************************ 00:10:51.248 14:10:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:51.248 * Looking for test storage... 00:10:51.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:51.248 14:10:21 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.248 14:10:21 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.248 14:10:21 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.248 14:10:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.248 14:10:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.248 14:10:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.248 14:10:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.248 14:10:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.248 14:10:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.248 14:10:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.248 14:10:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.248 14:10:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.248 14:10:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.248 14:10:22 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:51.248 14:10:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.248 14:10:22 version -- scripts/common.sh@344 -- # case "$op" in 00:10:51.248 14:10:22 version -- scripts/common.sh@345 -- # : 1 00:10:51.248 14:10:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.248 14:10:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.248 14:10:22 version -- scripts/common.sh@365 -- # decimal 1 00:10:51.248 14:10:22 version -- scripts/common.sh@353 -- # local d=1 00:10:51.248 14:10:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.248 14:10:22 version -- scripts/common.sh@355 -- # echo 1 00:10:51.248 14:10:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.249 14:10:22 version -- scripts/common.sh@366 -- # decimal 2 00:10:51.249 14:10:22 version -- scripts/common.sh@353 -- # local d=2 00:10:51.249 14:10:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.249 14:10:22 version -- scripts/common.sh@355 -- # echo 2 00:10:51.249 14:10:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.249 14:10:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.249 14:10:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.249 14:10:22 version -- scripts/common.sh@368 -- # return 0 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.249 --rc genhtml_branch_coverage=1 00:10:51.249 --rc genhtml_function_coverage=1 00:10:51.249 --rc genhtml_legend=1 00:10:51.249 --rc geninfo_all_blocks=1 00:10:51.249 --rc geninfo_unexecuted_blocks=1 00:10:51.249 00:10:51.249 ' 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:10:51.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.249 --rc genhtml_branch_coverage=1 00:10:51.249 --rc genhtml_function_coverage=1 00:10:51.249 --rc genhtml_legend=1 00:10:51.249 --rc geninfo_all_blocks=1 00:10:51.249 --rc geninfo_unexecuted_blocks=1 00:10:51.249 00:10:51.249 ' 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.249 --rc genhtml_branch_coverage=1 00:10:51.249 --rc genhtml_function_coverage=1 00:10:51.249 --rc genhtml_legend=1 00:10:51.249 --rc geninfo_all_blocks=1 00:10:51.249 --rc geninfo_unexecuted_blocks=1 00:10:51.249 00:10:51.249 ' 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.249 --rc genhtml_branch_coverage=1 00:10:51.249 --rc genhtml_function_coverage=1 00:10:51.249 --rc genhtml_legend=1 00:10:51.249 --rc geninfo_all_blocks=1 00:10:51.249 --rc geninfo_unexecuted_blocks=1 00:10:51.249 00:10:51.249 ' 00:10:51.249 14:10:22 version -- app/version.sh@17 -- # get_header_version major 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # cut -f2 00:10:51.249 14:10:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:51.249 14:10:22 version -- app/version.sh@17 -- # major=25 00:10:51.249 14:10:22 version -- app/version.sh@18 -- # get_header_version minor 00:10:51.249 14:10:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # cut -f2 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:51.249 14:10:22 version -- app/version.sh@18 -- # minor=1 00:10:51.249 14:10:22 
version -- app/version.sh@19 -- # get_header_version patch 00:10:51.249 14:10:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # cut -f2 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:51.249 14:10:22 version -- app/version.sh@19 -- # patch=0 00:10:51.249 14:10:22 version -- app/version.sh@20 -- # get_header_version suffix 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # cut -f2 00:10:51.249 14:10:22 version -- app/version.sh@14 -- # tr -d '"' 00:10:51.249 14:10:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:51.249 14:10:22 version -- app/version.sh@20 -- # suffix=-pre 00:10:51.249 14:10:22 version -- app/version.sh@22 -- # version=25.1 00:10:51.249 14:10:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:51.249 14:10:22 version -- app/version.sh@28 -- # version=25.1rc0 00:10:51.249 14:10:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:51.249 14:10:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:51.249 14:10:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:51.249 14:10:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:51.249 00:10:51.249 real 0m0.241s 00:10:51.249 user 0m0.162s 00:10:51.249 sys 0m0.117s 00:10:51.249 14:10:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.249 14:10:22 version -- common/autotest_common.sh@10 -- # set +x 00:10:51.249 ************************************ 00:10:51.249 END TEST version 00:10:51.249 ************************************ 00:10:51.249 
14:10:22 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:51.249 14:10:22 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:51.249 14:10:22 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:51.249 14:10:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.249 14:10:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.249 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:10:51.249 ************************************ 00:10:51.249 START TEST bdev_raid 00:10:51.249 ************************************ 00:10:51.249 14:10:22 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:51.507 * Looking for test storage... 00:10:51.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.507 14:10:22 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.507 --rc genhtml_branch_coverage=1 00:10:51.507 --rc genhtml_function_coverage=1 00:10:51.507 --rc genhtml_legend=1 00:10:51.507 --rc geninfo_all_blocks=1 00:10:51.507 --rc geninfo_unexecuted_blocks=1 00:10:51.507 00:10:51.507 ' 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.507 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:51.507 --rc genhtml_branch_coverage=1 00:10:51.507 --rc genhtml_function_coverage=1 00:10:51.507 --rc genhtml_legend=1 00:10:51.507 --rc geninfo_all_blocks=1 00:10:51.507 --rc geninfo_unexecuted_blocks=1 00:10:51.507 00:10:51.507 ' 00:10:51.507 14:10:22 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.507 --rc genhtml_branch_coverage=1 00:10:51.507 --rc genhtml_function_coverage=1 00:10:51.507 --rc genhtml_legend=1 00:10:51.507 --rc geninfo_all_blocks=1 00:10:51.507 --rc geninfo_unexecuted_blocks=1 00:10:51.507 00:10:51.507 ' 00:10:51.508 14:10:22 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.508 --rc genhtml_branch_coverage=1 00:10:51.508 --rc genhtml_function_coverage=1 00:10:51.508 --rc genhtml_legend=1 00:10:51.508 --rc geninfo_all_blocks=1 00:10:51.508 --rc geninfo_unexecuted_blocks=1 00:10:51.508 00:10:51.508 ' 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:51.508 14:10:22 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:51.508 14:10:22 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:51.508 14:10:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.508 14:10:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.508 14:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.508 ************************************ 
00:10:51.508 START TEST raid1_resize_data_offset_test 00:10:51.508 ************************************ 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60188 00:10:51.508 Process raid pid: 60188 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60188' 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60188 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60188 ']' 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.508 14:10:22 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.508 [2024-11-27 14:10:22.450954] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:51.508 [2024-11-27 14:10:22.451615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.766 [2024-11-27 14:10:22.621056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.024 [2024-11-27 14:10:22.759330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.312 [2024-11-27 14:10:22.996299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.312 [2024-11-27 14:10:22.996342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.569 malloc0 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.569 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 malloc1 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.826 14:10:23 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 null0 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 [2024-11-27 14:10:23.559453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:52.826 [2024-11-27 14:10:23.561610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:52.826 [2024-11-27 14:10:23.561677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:52.826 [2024-11-27 14:10:23.561866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:52.826 [2024-11-27 14:10:23.561884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:52.826 [2024-11-27 14:10:23.562227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:52.826 [2024-11-27 14:10:23.562449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:52.826 [2024-11-27 14:10:23.562471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:52.826 [2024-11-27 14:10:23.562673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
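The RAID1 creation records above report `blockcnt 129024, blocklen 512` for base bdevs of 64 MiB each, and the test later checks a per-bdev `data_offset` of 2048 blocks via `bdev_raid_get_bdevs`. A quick arithmetic sanity check of those logged numbers (this sketch is not part of the test scripts; the variable names are illustrative):

```python
# Sanity-check the RAID1 block count reported in the log:
# each base bdev is 64 MiB with 512-byte blocks, and the raid
# superblock reserves a data_offset of 2048 blocks per base bdev.
BLOCKLEN = 512
base_blocks = 64 * 1024 * 1024 // BLOCKLEN   # 131072 blocks per 64 MiB bdev
data_offset = 2048                           # blocks, as the test verifies

# RAID1 mirrors its members, so the usable size equals one base
# bdev minus the reserved offset.
raid1_blocks = base_blocks - data_offset
print(raid1_blocks)  # 129024, matching "blockcnt 129024, blocklen 512"
```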
00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.826 [2024-11-27 14:10:23.615383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.826 14:10:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.391 malloc2 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.391 [2024-11-27 14:10:24.237984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:53.391 [2024-11-27 14:10:24.255853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.391 [2024-11-27 14:10:24.257821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60188 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60188 ']' 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60188 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60188 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60188' 00:10:53.391 killing process with pid 60188 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60188 00:10:53.391 [2024-11-27 14:10:24.338641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.391 14:10:24 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60188 00:10:53.391 [2024-11-27 14:10:24.338878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:10:53.391 [2024-11-27 14:10:24.338975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.391 [2024-11-27 14:10:24.339034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:10:53.649 [2024-11-27 14:10:24.378234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.649 [2024-11-27 14:10:24.378657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.649 [2024-11-27 14:10:24.378682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:55.607 [2024-11-27 14:10:26.237471] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.544 14:10:27 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:10:56.544 00:10:56.544 real 0m5.116s 00:10:56.544 user 0m5.065s 00:10:56.544 sys 0m0.523s 00:10:56.544 
************************************ 00:10:56.544 END TEST raid1_resize_data_offset_test 00:10:56.544 ************************************ 00:10:56.544 14:10:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.544 14:10:27 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.804 14:10:27 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:10:56.804 14:10:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.804 14:10:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.804 14:10:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.804 ************************************ 00:10:56.804 START TEST raid0_resize_superblock_test 00:10:56.804 ************************************ 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60277 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60277' 00:10:56.804 Process raid pid: 60277 00:10:56.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60277 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60277 ']' 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.804 14:10:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.804 [2024-11-27 14:10:27.613120] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:10:56.804 [2024-11-27 14:10:27.613250] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.064 [2024-11-27 14:10:27.789582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.064 [2024-11-27 14:10:27.910047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.323 [2024-11-27 14:10:28.119866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.323 [2024-11-27 14:10:28.119912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.892 14:10:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.892 14:10:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.892 14:10:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:57.892 14:10:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.892 14:10:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 malloc0 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 [2024-11-27 14:10:29.184146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:58.458 [2024-11-27 14:10:29.184224] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.458 [2024-11-27 14:10:29.184252] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.458 [2024-11-27 14:10:29.184267] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.458 [2024-11-27 14:10:29.186745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.458 [2024-11-27 14:10:29.186791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:58.458 pt0 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 c86bdc87-c64c-49bd-87cb-8b0a8a4a64ce 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 b118e7d5-29e1-46a8-b751-74fb7172e78e 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 098a9f61-c6e8-4a11-984d-d258dcee75ad 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 [2024-11-27 14:10:29.311707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b118e7d5-29e1-46a8-b751-74fb7172e78e is claimed 00:10:58.458 [2024-11-27 14:10:29.311822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 098a9f61-c6e8-4a11-984d-d258dcee75ad is claimed 00:10:58.458 [2024-11-27 14:10:29.311982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.458 [2024-11-27 14:10:29.312000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:10:58.458 [2024-11-27 14:10:29.312354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:58.458 [2024-11-27 14:10:29.312585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.458 [2024-11-27 14:10:29.312598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:58.458 [2024-11-27 14:10:29.312799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.458 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:58.459 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:58.459 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.459 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:58.459 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.459 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.716 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:58.716 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:58.716 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:58.716 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.716 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 [2024-11-27 
14:10:29.419820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 [2024-11-27 14:10:29.463697] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:58.717 [2024-11-27 14:10:29.463792] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b118e7d5-29e1-46a8-b751-74fb7172e78e' was resized: old size 131072, new size 204800 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 [2024-11-27 14:10:29.475611] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:58.717 [2024-11-27 14:10:29.475694] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '098a9f61-c6e8-4a11-984d-d258dcee75ad' was resized: old size 131072, new size 204800 00:10:58.717 
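The resize notices above give enough to recover the raid0 geometry: two 64 MiB lvols (131072 blocks each) backed a raid0 whose total block count checked out as 245760, and each base bdev was resized from 131072 to 204800 blocks. The per-bdev superblock data offset is inferred from those logged totals, and, assuming it stays fixed across the resize, the expected new raid0 size follows (a sketch for the reader, not part of the test suite):

```python
# Derive the raid0 geometry from the numbers logged above:
# two 131072-block lvols back a raid0 totalling 245760 blocks,
# so each base bdev contributes (245760 / 2) data blocks and the
# remainder is the superblock data offset.
base_blocks_old = 131072
raid0_blocks_old = 245760
num_base_bdevs = 2
data_offset = base_blocks_old - raid0_blocks_old // num_base_bdevs
print(data_offset)  # 8192 blocks (4 MiB at 512-byte blocks) - inferred

# After each lvol is resized to 100 MiB (204800 blocks), and
# assuming the data offset is unchanged, the raid0 should grow to:
base_blocks_new = 204800
raid0_blocks_new = num_base_bdevs * (base_blocks_new - data_offset)
print(raid0_blocks_new)  # 393216
```

The 393216 figure is exactly what the subsequent `block count was changed` notice and the `(( 393216 == 393216 ))` check in this test report.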
[2024-11-27 14:10:29.475763] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:10:58.717 [2024-11-27 14:10:29.583556] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 [2024-11-27 14:10:29.631245] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:10:58.717 [2024-11-27 14:10:29.631379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:58.717 [2024-11-27 14:10:29.631419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.717 [2024-11-27 14:10:29.631469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:58.717 [2024-11-27 14:10:29.631614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.717 [2024-11-27 14:10:29.631688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.717 
[2024-11-27 14:10:29.631744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 [2024-11-27 14:10:29.639087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:58.717 [2024-11-27 14:10:29.639168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.717 [2024-11-27 14:10:29.639192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:58.717 [2024-11-27 14:10:29.639205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.717 [2024-11-27 14:10:29.641749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.717 [2024-11-27 14:10:29.641795] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:58.717 [2024-11-27 14:10:29.643718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b118e7d5-29e1-46a8-b751-74fb7172e78e 00:10:58.717 [2024-11-27 14:10:29.643807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b118e7d5-29e1-46a8-b751-74fb7172e78e is claimed 00:10:58.717 [2024-11-27 14:10:29.643923] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 098a9f61-c6e8-4a11-984d-d258dcee75ad 00:10:58.717 [2024-11-27 14:10:29.643944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 098a9f61-c6e8-4a11-984d-d258dcee75ad is claimed 00:10:58.717 [2024-11-27 14:10:29.644185] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 098a9f61-c6e8-4a11-984d-d258dcee75ad (2) smaller than existing raid bdev Raid (3) 00:10:58.717 [2024-11-27 14:10:29.644217] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b118e7d5-29e1-46a8-b751-74fb7172e78e: File exists 00:10:58.717 [2024-11-27 14:10:29.644262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:58.717 [2024-11-27 14:10:29.644276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:10:58.717 pt0 00:10:58.717 [2024-11-27 14:10:29.644582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:58.717 [2024-11-27 14:10:29.644763] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:58.717 [2024-11-27 14:10:29.644773] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 [2024-11-27 14:10:29.644941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:58.717 14:10:29 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:58.718 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:10:58.718 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.718 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.718 [2024-11-27 14:10:29.660050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60277 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60277 ']' 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60277 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60277 00:10:58.975 killing process with pid 60277 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60277' 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60277 00:10:58.975 [2024-11-27 14:10:29.733197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.975 14:10:29 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60277 00:10:58.975 [2024-11-27 14:10:29.733293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.975 [2024-11-27 14:10:29.733348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.975 [2024-11-27 14:10:29.733359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:00.876 [2024-11-27 14:10:31.478525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.267 14:10:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:02.267 00:11:02.267 real 0m5.313s 00:11:02.267 user 0m5.585s 00:11:02.267 sys 0m0.550s 00:11:02.267 ************************************ 00:11:02.267 END TEST raid0_resize_superblock_test 00:11:02.267 ************************************ 00:11:02.267 14:10:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.267 14:10:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.267 14:10:32 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:11:02.267 14:10:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.267 14:10:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.267 14:10:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.267 ************************************ 00:11:02.267 START TEST raid1_resize_superblock_test 00:11:02.267 
************************************ 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60387 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60387' 00:11:02.267 Process raid pid: 60387 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60387 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60387 ']' 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.267 14:10:32 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.267 [2024-11-27 14:10:32.993976] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:02.267 [2024-11-27 14:10:32.994295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.267 [2024-11-27 14:10:33.172984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.526 [2024-11-27 14:10:33.310615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.785 [2024-11-27 14:10:33.546467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.785 [2024-11-27 14:10:33.546517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.042 14:10:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.042 14:10:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.042 14:10:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:11:03.042 14:10:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.042 14:10:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 malloc0 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.608 [2024-11-27 14:10:34.488454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:03.608 [2024-11-27 14:10:34.488592] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.608 [2024-11-27 14:10:34.488622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:03.608 [2024-11-27 14:10:34.488637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.608 [2024-11-27 14:10:34.491377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.608 [2024-11-27 14:10:34.491424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:03.608 pt0 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.608 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 b9878bed-c4ac-4231-805f-c604364b6faa 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 cc36a170-3c1f-4d8f-8b31-547a9c9d8006 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 7bee1b60-a0a5-48a0-b001-309eecb884c5 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 [2024-11-27 14:10:34.615965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cc36a170-3c1f-4d8f-8b31-547a9c9d8006 is claimed 00:11:03.867 [2024-11-27 14:10:34.616154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7bee1b60-a0a5-48a0-b001-309eecb884c5 is claimed 00:11:03.867 [2024-11-27 14:10:34.616346] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:03.867 [2024-11-27 14:10:34.616366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:11:03.867 [2024-11-27 14:10:34.616686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:03.867 [2024-11-27 14:10:34.616923] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:03.867 [2024-11-27 14:10:34.616936] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:03.867 [2024-11-27 14:10:34.617162] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:11:03.867 [2024-11-27 
14:10:34.712168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 [2024-11-27 14:10:34.756001] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:03.867 [2024-11-27 14:10:34.756038] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cc36a170-3c1f-4d8f-8b31-547a9c9d8006' was resized: old size 131072, new size 204800 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 [2024-11-27 14:10:34.767915] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:03.867 [2024-11-27 14:10:34.767945] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7bee1b60-a0a5-48a0-b001-309eecb884c5' was resized: old size 131072, new size 204800 00:11:03.867 
[2024-11-27 14:10:34.767982] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.867 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-27 14:10:34.863853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-27 14:10:34.911522] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:04.126 [2024-11-27 14:10:34.911666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:04.126 [2024-11-27 14:10:34.911731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:04.126 [2024-11-27 14:10:34.911940] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.126 [2024-11-27 14:10:34.912251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.126 [2024-11-27 14:10:34.912381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.126 
[2024-11-27 14:10:34.912445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 [2024-11-27 14:10:34.919382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:04.126 [2024-11-27 14:10:34.919483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.126 [2024-11-27 14:10:34.919533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:11:04.126 [2024-11-27 14:10:34.919574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.126 [2024-11-27 14:10:34.922147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.126 [2024-11-27 14:10:34.922248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:04.126 [2024-11-27 14:10:34.924224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cc36a170-3c1f-4d8f-8b31-547a9c9d8006 00:11:04.126 [2024-11-27 14:10:34.924366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cc36a170-3c1f-4d8f-8b31-547a9c9d8006 is claimed 00:11:04.126 [2024-11-27 14:10:34.924559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7bee1b60-a0a5-48a0-b001-309eecb884c5 00:11:04.126 [2024-11-27 14:10:34.924630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7bee1b60-a0a5-48a0-b001-309eecb884c5 is claimed 00:11:04.126 [2024-11-27 14:10:34.924860] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7bee1b60-a0a5-48a0-b001-309eecb884c5 (2) smaller than existing raid bdev Raid (3) 00:11:04.126 pt0 00:11:04.126 [2024-11-27 14:10:34.924928] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev cc36a170-3c1f-4d8f-8b31-547a9c9d8006: File exists 00:11:04.126 [2024-11-27 14:10:34.924975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:04.126 [2024-11-27 14:10:34.924989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:04.126 [2024-11-27 14:10:34.925299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 [2024-11-27 14:10:34.925487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:04.126 [2024-11-27 14:10:34.925498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:11:04.126 [2024-11-27 14:10:34.925666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:11:04.126 [2024-11-27 14:10:34.940560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60387 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60387 ']' 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60387 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:04.126 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.127 14:10:34 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60387 00:11:04.127 killing process with pid 60387 00:11:04.127 14:10:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.127 14:10:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.127 14:10:35 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60387' 00:11:04.127 14:10:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60387 00:11:04.127 [2024-11-27 14:10:35.015715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.127 [2024-11-27 14:10:35.015818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.127 14:10:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60387 00:11:04.127 [2024-11-27 14:10:35.015883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.127 [2024-11-27 14:10:35.015894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:11:06.025 [2024-11-27 14:10:36.676537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.399 14:10:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:11:07.399 00:11:07.399 real 0m5.103s 00:11:07.399 user 0m5.275s 00:11:07.399 sys 0m0.585s 00:11:07.399 ************************************ 00:11:07.399 END TEST raid1_resize_superblock_test 00:11:07.399 ************************************ 00:11:07.399 14:10:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.399 14:10:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:11:07.399 14:10:38 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:07.399 
14:10:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.399 14:10:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.399 14:10:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 ************************************ 00:11:07.399 START TEST raid_function_test_raid0 00:11:07.399 ************************************ 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60495 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60495' 00:11:07.399 Process raid pid: 60495 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60495 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60495 ']' 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.399 14:10:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:07.399 [2024-11-27 14:10:38.166559] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:07.399 [2024-11-27 14:10:38.166794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.658 [2024-11-27 14:10:38.384824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.658 [2024-11-27 14:10:38.518320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.918 [2024-11-27 14:10:38.760951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.918 [2024-11-27 14:10:38.761097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:08.177 Base_1 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.177 
14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:08.177 Base_2 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.177 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:08.177 [2024-11-27 14:10:39.110021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:08.178 [2024-11-27 14:10:39.111830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:08.178 [2024-11-27 14:10:39.111898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:08.178 [2024-11-27 14:10:39.111910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:08.178 [2024-11-27 14:10:39.112230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:08.178 [2024-11-27 14:10:39.112425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:08.178 [2024-11-27 14:10:39.112435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:08.178 [2024-11-27 14:10:39.112623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.178 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.178 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:08.178 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.178 14:10:39 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:08.178 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:08.437 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:08.438 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:08.438 [2024-11-27 14:10:39.361658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:08.438 /dev/nbd0 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:08.748 1+0 records in 00:11:08.748 1+0 records out 00:11:08.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473553 s, 8.6 MB/s 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:08.748 { 00:11:08.748 "nbd_device": "/dev/nbd0", 00:11:08.748 "bdev_name": "raid" 00:11:08.748 } 00:11:08.748 ]' 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:08.748 { 00:11:08.748 "nbd_device": "/dev/nbd0", 00:11:08.748 "bdev_name": "raid" 00:11:08.748 } 00:11:08.748 ]' 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:08.748 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:09.009 4096+0 records in 00:11:09.009 4096+0 records out 00:11:09.009 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0279127 s, 75.1 MB/s 00:11:09.009 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:09.269 4096+0 records in 00:11:09.269 4096+0 records out 00:11:09.269 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.219278 s, 9.6 MB/s 00:11:09.269 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:09.269 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:09.269 14:10:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:09.269 128+0 records in 00:11:09.269 128+0 records out 00:11:09.269 65536 bytes (66 kB, 64 KiB) copied, 0.00134059 s, 48.9 MB/s 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:09.269 2035+0 records in 00:11:09.269 2035+0 records out 00:11:09.269 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0148447 s, 70.2 MB/s 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:09.269 456+0 records in 00:11:09.269 456+0 records out 00:11:09.269 233472 bytes (233 kB, 228 KiB) copied, 0.0028182 s, 82.8 MB/s 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:09.269 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:09.270 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:09.270 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:09.529 [2024-11-27 14:10:40.338966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.529 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60495 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60495 ']' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60495 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60495 00:11:09.789 killing process with pid 60495 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60495' 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60495 00:11:09.789 [2024-11-27 14:10:40.664313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.789 [2024-11-27 14:10:40.664423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.789 [2024-11-27 14:10:40.664471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.789 [2024-11-27 14:10:40.664492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:09.789 14:10:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60495 00:11:10.050 [2024-11-27 14:10:40.880333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.429 14:10:42 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:11:11.429 00:11:11.429 real 0m3.963s 00:11:11.429 user 0m4.606s 00:11:11.429 sys 0m0.971s 00:11:11.429 14:10:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.429 14:10:42 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:11.429 ************************************ 00:11:11.429 END TEST raid_function_test_raid0 00:11:11.429 ************************************ 00:11:11.429 14:10:42 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:11:11.429 14:10:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.429 14:10:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.429 14:10:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.429 
************************************ 00:11:11.429 START TEST raid_function_test_concat 00:11:11.429 ************************************ 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60619 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60619' 00:11:11.429 Process raid pid: 60619 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60619 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60619 ']' 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.429 14:10:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:11.429 [2024-11-27 14:10:42.194506] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:11.429 [2024-11-27 14:10:42.194673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.429 [2024-11-27 14:10:42.366281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.689 [2024-11-27 14:10:42.492741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.946 [2024-11-27 14:10:42.705730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.946 [2024-11-27 14:10:42.705860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 Base_1 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 Base_2 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 [2024-11-27 14:10:43.136323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:12.206 [2024-11-27 14:10:43.138187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:12.206 [2024-11-27 14:10:43.138254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:12.206 [2024-11-27 14:10:43.138265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:12.206 [2024-11-27 14:10:43.138512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:12.206 [2024-11-27 14:10:43.138657] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:12.206 [2024-11-27 14:10:43.138665] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:11:12.206 [2024-11-27 14:10:43.138824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.206 14:10:43 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:11:12.206 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.464 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:11:12.464 [2024-11-27 14:10:43.388197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.464 /dev/nbd0 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:12.723 1+0 records in 00:11:12.723 1+0 records out 00:11:12.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305688 s, 13.4 MB/s 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:12.723 
14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:12.723 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:12.983 { 00:11:12.983 "nbd_device": "/dev/nbd0", 00:11:12.983 "bdev_name": "raid" 00:11:12.983 } 00:11:12.983 ]' 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:12.983 { 00:11:12.983 "nbd_device": "/dev/nbd0", 00:11:12.983 "bdev_name": "raid" 00:11:12.983 } 00:11:12.983 ]' 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:11:12.983 
14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:11:12.983 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:12.984 4096+0 records in 00:11:12.984 4096+0 records out 00:11:12.984 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0359584 s, 58.3 MB/s 00:11:12.984 14:10:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:13.243 4096+0 records in 00:11:13.243 4096+0 
records out 00:11:13.243 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.223929 s, 9.4 MB/s 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:13.243 128+0 records in 00:11:13.243 128+0 records out 00:11:13.243 65536 bytes (66 kB, 64 KiB) copied, 0.000517817 s, 127 MB/s 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:11:13.243 2035+0 records in 00:11:13.243 2035+0 records out 00:11:13.243 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130227 s, 80.0 MB/s 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:13.243 456+0 records in 00:11:13.243 456+0 records out 00:11:13.243 233472 bytes (233 kB, 228 KiB) copied, 0.00174929 s, 133 MB/s 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.243 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:13.503 [2024-11-27 14:10:44.369067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:11:13.503 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:11:13.503 14:10:44 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60619 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60619 ']' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60619 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.763 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60619 00:11:14.023 killing process with pid 60619 00:11:14.023 14:10:44 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.023 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.023 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60619' 00:11:14.023 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60619 00:11:14.023 [2024-11-27 14:10:44.743609] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.023 [2024-11-27 14:10:44.743710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.023 [2024-11-27 14:10:44.743765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.023 14:10:44 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60619 00:11:14.023 [2024-11-27 14:10:44.743777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:11:14.023 [2024-11-27 14:10:44.960937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.463 14:10:46 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:11:15.463 00:11:15.463 real 0m4.034s 00:11:15.463 user 0m4.702s 00:11:15.463 sys 0m0.985s 00:11:15.463 14:10:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.463 14:10:46 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:15.463 ************************************ 00:11:15.463 END TEST raid_function_test_concat 00:11:15.463 ************************************ 00:11:15.463 14:10:46 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:11:15.463 14:10:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.463 14:10:46 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.463 14:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.463 ************************************ 00:11:15.463 START TEST raid0_resize_test 00:11:15.463 ************************************ 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60748 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60748' 00:11:15.463 Process raid pid: 60748 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60748 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60748 ']' 00:11:15.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.463 14:10:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.463 [2024-11-27 14:10:46.295170] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:15.463 [2024-11-27 14:10:46.295400] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.723 [2024-11-27 14:10:46.453773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.723 [2024-11-27 14:10:46.575887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.983 [2024-11-27 14:10:46.783555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.983 [2024-11-27 14:10:46.783615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 Base_1 
00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 Base_2 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 [2024-11-27 14:10:47.171727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:16.243 [2024-11-27 14:10:47.174228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:16.243 [2024-11-27 14:10:47.174298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:16.243 [2024-11-27 14:10:47.174316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:16.243 [2024-11-27 14:10:47.174693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:16.243 [2024-11-27 14:10:47.174843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:16.243 [2024-11-27 14:10:47.174855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:16.243 [2024-11-27 14:10:47.175043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
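The `num_blocks` values that raid0_resize_test checks in the trace (the `blockcnt 131072` debug line above, and `262144` after both base bdevs are resized) follow from plain arithmetic on the test parameters: two 32 MiB null base bdevs with 512-byte blocks in a two-member raid0, each later grown to 64 MiB. A minimal standalone check of that arithmetic, independent of SPDK:

```python
# Block-count arithmetic behind raid0_resize_test: each null base bdev is
# 32 MiB with 512-byte blocks, and a two-member raid0's block count is the
# sum of its bases'. Resizing both bases to 64 MiB doubles the raid's size.
MIB = 1024 * 1024

def blocks(size_mb: int, blksize: int = 512) -> int:
    """Number of blocks in a bdev of size_mb mebibytes with blksize-byte blocks."""
    return size_mb * MIB // blksize

raid0_before = 2 * blocks(32)  # matches the 'blockcnt 131072' debug line
raid0_after = 2 * blocks(64)   # matches num_blocks read after both resizes
print(raid0_before, raid0_after)
```

Running it prints `131072 262144`, agreeing with the `bdev_get_bdevs -b Raid | jq '.[].num_blocks'` reads before and after the two `bdev_null_resize` calls in the trace.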
00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 [2024-11-27 14:10:47.179655] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:16.243 [2024-11-27 14:10:47.179734] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:16.243 true 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.243 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:16.243 [2024-11-27 14:10:47.191852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.503 [2024-11-27 14:10:47.243567] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:16.503 [2024-11-27 14:10:47.243595] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:16.503 [2024-11-27 14:10:47.243627] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:16.503 true 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:16.503 [2024-11-27 14:10:47.255706] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60748 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60748 ']' 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60748 00:11:16.503 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60748 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60748' 00:11:16.504 killing process with pid 60748 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60748 00:11:16.504 [2024-11-27 14:10:47.343831] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.504 [2024-11-27 14:10:47.344005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.504 14:10:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60748 00:11:16.504 [2024-11-27 14:10:47.344134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.504 [2024-11-27 14:10:47.344201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:16.504 [2024-11-27 14:10:47.363867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.882 14:10:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:17.882 00:11:17.882 real 0m2.337s 00:11:17.882 user 0m2.509s 00:11:17.882 sys 0m0.317s 00:11:17.882 14:10:48 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.882 14:10:48 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.882 ************************************ 00:11:17.882 END TEST raid0_resize_test 00:11:17.882 ************************************ 00:11:17.882 14:10:48 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:11:17.882 14:10:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.882 14:10:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.882 14:10:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.882 ************************************ 00:11:17.882 START TEST raid1_resize_test 00:11:17.882 ************************************ 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60809 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:17.882 14:10:48 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60809' 00:11:17.882 Process raid pid: 60809 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60809 00:11:17.882 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60809 ']' 00:11:17.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.883 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.883 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.883 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.883 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.883 14:10:48 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.883 [2024-11-27 14:10:48.692599] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:17.883 [2024-11-27 14:10:48.692806] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.141 [2024-11-27 14:10:48.864941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.141 [2024-11-27 14:10:48.986805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.407 [2024-11-27 14:10:49.204899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.407 [2024-11-27 14:10:49.205043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 Base_1 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 Base_2 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 [2024-11-27 14:10:49.554629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:18.668 [2024-11-27 14:10:49.556360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:18.668 [2024-11-27 14:10:49.556418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:18.668 [2024-11-27 14:10:49.556430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:18.668 [2024-11-27 14:10:49.556668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:18.668 [2024-11-27 14:10:49.556795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:18.668 [2024-11-27 14:10:49.556803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:18.668 [2024-11-27 14:10:49.556927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 [2024-11-27 14:10:49.562606] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:18.668 [2024-11-27 14:10:49.562676] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:18.668 true 00:11:18.668 
14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.668 [2024-11-27 14:10:49.574733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.668 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.929 [2024-11-27 14:10:49.634532] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:18.929 [2024-11-27 14:10:49.634603] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:18.929 [2024-11-27 14:10:49.634663] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:18.929 true 00:11:18.929 14:10:49 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.929 [2024-11-27 14:10:49.650643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60809 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60809 ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60809 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60809 00:11:18.929 killing process with pid 60809 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:18.929 14:10:49 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60809' 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60809 00:11:18.929 [2024-11-27 14:10:49.735085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.929 [2024-11-27 14:10:49.735181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.929 14:10:49 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60809 00:11:18.929 [2024-11-27 14:10:49.735676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.929 [2024-11-27 14:10:49.735702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:18.929 [2024-11-27 14:10:49.752741] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.309 ************************************ 00:11:20.309 END TEST raid1_resize_test 00:11:20.309 ************************************ 00:11:20.309 14:10:50 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:11:20.309 00:11:20.309 real 0m2.301s 00:11:20.309 user 0m2.439s 00:11:20.309 sys 0m0.348s 00:11:20.309 14:10:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.309 14:10:50 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.309 14:10:50 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:20.309 14:10:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:20.309 14:10:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:20.309 14:10:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:20.309 14:10:50 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.309 14:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.309 ************************************ 00:11:20.309 START TEST raid_state_function_test 00:11:20.309 ************************************ 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:20.309 Process raid pid: 60866 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60866 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60866' 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60866 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60866 ']' 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:20.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.309 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.309 [2024-11-27 14:10:51.059026] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:20.309 [2024-11-27 14:10:51.059286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.309 [2024-11-27 14:10:51.218152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.568 [2024-11-27 14:10:51.347138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.828 [2024-11-27 14:10:51.559736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.828 [2024-11-27 14:10:51.559874] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.088 [2024-11-27 14:10:51.936244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.088 [2024-11-27 14:10:51.936377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:11:21.088 [2024-11-27 14:10:51.936395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.088 [2024-11-27 14:10:51.936407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.088 "name": "Existed_Raid", 00:11:21.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.088 "strip_size_kb": 64, 00:11:21.088 "state": "configuring", 00:11:21.088 "raid_level": "raid0", 00:11:21.088 "superblock": false, 00:11:21.088 "num_base_bdevs": 2, 00:11:21.088 "num_base_bdevs_discovered": 0, 00:11:21.088 "num_base_bdevs_operational": 2, 00:11:21.088 "base_bdevs_list": [ 00:11:21.088 { 00:11:21.088 "name": "BaseBdev1", 00:11:21.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.088 "is_configured": false, 00:11:21.088 "data_offset": 0, 00:11:21.088 "data_size": 0 00:11:21.088 }, 00:11:21.088 { 00:11:21.088 "name": "BaseBdev2", 00:11:21.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.088 "is_configured": false, 00:11:21.088 "data_offset": 0, 00:11:21.088 "data_size": 0 00:11:21.088 } 00:11:21.088 ] 00:11:21.088 }' 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.088 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.658 [2024-11-27 14:10:52.375400] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.658 [2024-11-27 14:10:52.375494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.658 [2024-11-27 14:10:52.383363] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.658 [2024-11-27 14:10:52.383446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.658 [2024-11-27 14:10:52.383496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.658 [2024-11-27 14:10:52.383525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.658 [2024-11-27 14:10:52.428197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.658 BaseBdev1 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.658 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.659 [ 00:11:21.659 { 00:11:21.659 "name": "BaseBdev1", 00:11:21.659 "aliases": [ 00:11:21.659 "70e452c5-e606-46ac-80c6-5f441c64208f" 00:11:21.659 ], 00:11:21.659 "product_name": "Malloc disk", 00:11:21.659 "block_size": 512, 00:11:21.659 "num_blocks": 65536, 00:11:21.659 "uuid": "70e452c5-e606-46ac-80c6-5f441c64208f", 00:11:21.659 "assigned_rate_limits": { 00:11:21.659 "rw_ios_per_sec": 0, 00:11:21.659 "rw_mbytes_per_sec": 0, 00:11:21.659 "r_mbytes_per_sec": 0, 00:11:21.659 "w_mbytes_per_sec": 0 00:11:21.659 }, 00:11:21.659 "claimed": true, 00:11:21.659 "claim_type": "exclusive_write", 00:11:21.659 "zoned": false, 00:11:21.659 "supported_io_types": { 00:11:21.659 "read": true, 00:11:21.659 "write": true, 00:11:21.659 "unmap": true, 00:11:21.659 "flush": true, 00:11:21.659 "reset": true, 00:11:21.659 "nvme_admin": false, 00:11:21.659 "nvme_io": 
false, 00:11:21.659 "nvme_io_md": false, 00:11:21.659 "write_zeroes": true, 00:11:21.659 "zcopy": true, 00:11:21.659 "get_zone_info": false, 00:11:21.659 "zone_management": false, 00:11:21.659 "zone_append": false, 00:11:21.659 "compare": false, 00:11:21.659 "compare_and_write": false, 00:11:21.659 "abort": true, 00:11:21.659 "seek_hole": false, 00:11:21.659 "seek_data": false, 00:11:21.659 "copy": true, 00:11:21.659 "nvme_iov_md": false 00:11:21.659 }, 00:11:21.659 "memory_domains": [ 00:11:21.659 { 00:11:21.659 "dma_device_id": "system", 00:11:21.659 "dma_device_type": 1 00:11:21.659 }, 00:11:21.659 { 00:11:21.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.659 "dma_device_type": 2 00:11:21.659 } 00:11:21.659 ], 00:11:21.659 "driver_specific": {} 00:11:21.659 } 00:11:21.659 ] 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.659 14:10:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.659 "name": "Existed_Raid", 00:11:21.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.659 "strip_size_kb": 64, 00:11:21.659 "state": "configuring", 00:11:21.659 "raid_level": "raid0", 00:11:21.659 "superblock": false, 00:11:21.659 "num_base_bdevs": 2, 00:11:21.659 "num_base_bdevs_discovered": 1, 00:11:21.659 "num_base_bdevs_operational": 2, 00:11:21.659 "base_bdevs_list": [ 00:11:21.659 { 00:11:21.659 "name": "BaseBdev1", 00:11:21.659 "uuid": "70e452c5-e606-46ac-80c6-5f441c64208f", 00:11:21.659 "is_configured": true, 00:11:21.659 "data_offset": 0, 00:11:21.659 "data_size": 65536 00:11:21.659 }, 00:11:21.659 { 00:11:21.659 "name": "BaseBdev2", 00:11:21.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.659 "is_configured": false, 00:11:21.659 "data_offset": 0, 00:11:21.659 "data_size": 0 00:11:21.659 } 00:11:21.659 ] 00:11:21.659 }' 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.659 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.228 14:10:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.228 [2024-11-27 14:10:52.987321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.228 [2024-11-27 14:10:52.987383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.228 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.228 [2024-11-27 14:10:52.999324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.228 [2024-11-27 14:10:53.001293] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.228 [2024-11-27 14:10:53.001375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:22.228 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.229 "name": "Existed_Raid", 00:11:22.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.229 "strip_size_kb": 64, 00:11:22.229 "state": "configuring", 00:11:22.229 "raid_level": "raid0", 00:11:22.229 "superblock": false, 00:11:22.229 "num_base_bdevs": 2, 00:11:22.229 "num_base_bdevs_discovered": 1, 00:11:22.229 "num_base_bdevs_operational": 2, 
00:11:22.229 "base_bdevs_list": [ 00:11:22.229 { 00:11:22.229 "name": "BaseBdev1", 00:11:22.229 "uuid": "70e452c5-e606-46ac-80c6-5f441c64208f", 00:11:22.229 "is_configured": true, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 65536 00:11:22.229 }, 00:11:22.229 { 00:11:22.229 "name": "BaseBdev2", 00:11:22.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.229 "is_configured": false, 00:11:22.229 "data_offset": 0, 00:11:22.229 "data_size": 0 00:11:22.229 } 00:11:22.229 ] 00:11:22.229 }' 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.229 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.798 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:22.798 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.798 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.798 [2024-11-27 14:10:53.500631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.798 [2024-11-27 14:10:53.500683] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:22.799 [2024-11-27 14:10:53.500693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:22.799 [2024-11-27 14:10:53.500959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:22.799 [2024-11-27 14:10:53.501158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:22.799 [2024-11-27 14:10:53.501174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:22.799 [2024-11-27 14:10:53.501462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.799 BaseBdev2 00:11:22.799 
14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.799 [ 00:11:22.799 { 00:11:22.799 "name": "BaseBdev2", 00:11:22.799 "aliases": [ 00:11:22.799 "084ddce4-bba9-4a80-b07f-05d24106aee6" 00:11:22.799 ], 00:11:22.799 "product_name": "Malloc disk", 00:11:22.799 "block_size": 512, 00:11:22.799 "num_blocks": 65536, 00:11:22.799 "uuid": "084ddce4-bba9-4a80-b07f-05d24106aee6", 00:11:22.799 "assigned_rate_limits": { 00:11:22.799 "rw_ios_per_sec": 0, 00:11:22.799 "rw_mbytes_per_sec": 0, 
00:11:22.799 "r_mbytes_per_sec": 0, 00:11:22.799 "w_mbytes_per_sec": 0 00:11:22.799 }, 00:11:22.799 "claimed": true, 00:11:22.799 "claim_type": "exclusive_write", 00:11:22.799 "zoned": false, 00:11:22.799 "supported_io_types": { 00:11:22.799 "read": true, 00:11:22.799 "write": true, 00:11:22.799 "unmap": true, 00:11:22.799 "flush": true, 00:11:22.799 "reset": true, 00:11:22.799 "nvme_admin": false, 00:11:22.799 "nvme_io": false, 00:11:22.799 "nvme_io_md": false, 00:11:22.799 "write_zeroes": true, 00:11:22.799 "zcopy": true, 00:11:22.799 "get_zone_info": false, 00:11:22.799 "zone_management": false, 00:11:22.799 "zone_append": false, 00:11:22.799 "compare": false, 00:11:22.799 "compare_and_write": false, 00:11:22.799 "abort": true, 00:11:22.799 "seek_hole": false, 00:11:22.799 "seek_data": false, 00:11:22.799 "copy": true, 00:11:22.799 "nvme_iov_md": false 00:11:22.799 }, 00:11:22.799 "memory_domains": [ 00:11:22.799 { 00:11:22.799 "dma_device_id": "system", 00:11:22.799 "dma_device_type": 1 00:11:22.799 }, 00:11:22.799 { 00:11:22.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.799 "dma_device_type": 2 00:11:22.799 } 00:11:22.799 ], 00:11:22.799 "driver_specific": {} 00:11:22.799 } 00:11:22.799 ] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.799 "name": "Existed_Raid", 00:11:22.799 "uuid": "1e1abede-9472-4444-a5a7-65fad05f30e7", 00:11:22.799 "strip_size_kb": 64, 00:11:22.799 "state": "online", 00:11:22.799 "raid_level": "raid0", 00:11:22.799 "superblock": false, 00:11:22.799 "num_base_bdevs": 2, 00:11:22.799 "num_base_bdevs_discovered": 2, 00:11:22.799 "num_base_bdevs_operational": 2, 00:11:22.799 "base_bdevs_list": [ 00:11:22.799 { 00:11:22.799 "name": "BaseBdev1", 00:11:22.799 "uuid": "70e452c5-e606-46ac-80c6-5f441c64208f", 00:11:22.799 
"is_configured": true, 00:11:22.799 "data_offset": 0, 00:11:22.799 "data_size": 65536 00:11:22.799 }, 00:11:22.799 { 00:11:22.799 "name": "BaseBdev2", 00:11:22.799 "uuid": "084ddce4-bba9-4a80-b07f-05d24106aee6", 00:11:22.799 "is_configured": true, 00:11:22.799 "data_offset": 0, 00:11:22.799 "data_size": 65536 00:11:22.799 } 00:11:22.799 ] 00:11:22.799 }' 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.799 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.059 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.059 [2024-11-27 14:10:53.996263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.319 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.319 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:11:23.319 "name": "Existed_Raid", 00:11:23.319 "aliases": [ 00:11:23.319 "1e1abede-9472-4444-a5a7-65fad05f30e7" 00:11:23.319 ], 00:11:23.319 "product_name": "Raid Volume", 00:11:23.320 "block_size": 512, 00:11:23.320 "num_blocks": 131072, 00:11:23.320 "uuid": "1e1abede-9472-4444-a5a7-65fad05f30e7", 00:11:23.320 "assigned_rate_limits": { 00:11:23.320 "rw_ios_per_sec": 0, 00:11:23.320 "rw_mbytes_per_sec": 0, 00:11:23.320 "r_mbytes_per_sec": 0, 00:11:23.320 "w_mbytes_per_sec": 0 00:11:23.320 }, 00:11:23.320 "claimed": false, 00:11:23.320 "zoned": false, 00:11:23.320 "supported_io_types": { 00:11:23.320 "read": true, 00:11:23.320 "write": true, 00:11:23.320 "unmap": true, 00:11:23.320 "flush": true, 00:11:23.320 "reset": true, 00:11:23.320 "nvme_admin": false, 00:11:23.320 "nvme_io": false, 00:11:23.320 "nvme_io_md": false, 00:11:23.320 "write_zeroes": true, 00:11:23.320 "zcopy": false, 00:11:23.320 "get_zone_info": false, 00:11:23.320 "zone_management": false, 00:11:23.320 "zone_append": false, 00:11:23.320 "compare": false, 00:11:23.320 "compare_and_write": false, 00:11:23.320 "abort": false, 00:11:23.320 "seek_hole": false, 00:11:23.320 "seek_data": false, 00:11:23.320 "copy": false, 00:11:23.320 "nvme_iov_md": false 00:11:23.320 }, 00:11:23.320 "memory_domains": [ 00:11:23.320 { 00:11:23.320 "dma_device_id": "system", 00:11:23.320 "dma_device_type": 1 00:11:23.320 }, 00:11:23.320 { 00:11:23.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.320 "dma_device_type": 2 00:11:23.320 }, 00:11:23.320 { 00:11:23.320 "dma_device_id": "system", 00:11:23.320 "dma_device_type": 1 00:11:23.320 }, 00:11:23.320 { 00:11:23.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.320 "dma_device_type": 2 00:11:23.320 } 00:11:23.320 ], 00:11:23.320 "driver_specific": { 00:11:23.320 "raid": { 00:11:23.320 "uuid": "1e1abede-9472-4444-a5a7-65fad05f30e7", 00:11:23.320 "strip_size_kb": 64, 00:11:23.320 "state": "online", 00:11:23.320 "raid_level": "raid0", 
00:11:23.320 "superblock": false, 00:11:23.320 "num_base_bdevs": 2, 00:11:23.320 "num_base_bdevs_discovered": 2, 00:11:23.320 "num_base_bdevs_operational": 2, 00:11:23.320 "base_bdevs_list": [ 00:11:23.320 { 00:11:23.320 "name": "BaseBdev1", 00:11:23.320 "uuid": "70e452c5-e606-46ac-80c6-5f441c64208f", 00:11:23.320 "is_configured": true, 00:11:23.320 "data_offset": 0, 00:11:23.320 "data_size": 65536 00:11:23.320 }, 00:11:23.320 { 00:11:23.320 "name": "BaseBdev2", 00:11:23.320 "uuid": "084ddce4-bba9-4a80-b07f-05d24106aee6", 00:11:23.320 "is_configured": true, 00:11:23.320 "data_offset": 0, 00:11:23.320 "data_size": 65536 00:11:23.320 } 00:11:23.320 ] 00:11:23.320 } 00:11:23.320 } 00:11:23.320 }' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:23.320 BaseBdev2' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.320 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.320 [2024-11-27 14:10:54.203610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.320 [2024-11-27 14:10:54.203689] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.320 [2024-11-27 14:10:54.203767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.579 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.580 14:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.580 "name": "Existed_Raid", 00:11:23.580 "uuid": "1e1abede-9472-4444-a5a7-65fad05f30e7", 00:11:23.580 "strip_size_kb": 64, 00:11:23.580 "state": "offline", 00:11:23.580 "raid_level": "raid0", 00:11:23.580 "superblock": false, 00:11:23.580 "num_base_bdevs": 2, 00:11:23.580 "num_base_bdevs_discovered": 1, 00:11:23.580 "num_base_bdevs_operational": 1, 00:11:23.580 "base_bdevs_list": [ 00:11:23.580 { 00:11:23.580 "name": null, 00:11:23.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.580 "is_configured": false, 00:11:23.580 "data_offset": 0, 00:11:23.580 "data_size": 65536 00:11:23.580 }, 00:11:23.580 { 00:11:23.580 "name": "BaseBdev2", 00:11:23.580 "uuid": "084ddce4-bba9-4a80-b07f-05d24106aee6", 00:11:23.580 "is_configured": true, 00:11:23.580 "data_offset": 0, 00:11:23.580 "data_size": 65536 00:11:23.580 } 00:11:23.580 ] 00:11:23.580 }' 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.580 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.838 14:10:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.838 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.838 [2024-11-27 14:10:54.757901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:23.838 [2024-11-27 14:10:54.758029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60866 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60866 ']' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60866 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60866 00:11:24.096 killing process with pid 60866 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60866' 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60866 00:11:24.096 [2024-11-27 14:10:54.950354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.096 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60866 00:11:24.096 [2024-11-27 14:10:54.969980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:25.481 00:11:25.481 real 0m5.166s 00:11:25.481 user 0m7.499s 00:11:25.481 sys 0m0.821s 00:11:25.481 ************************************ 00:11:25.481 END TEST raid_state_function_test 
00:11:25.481 ************************************ 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 14:10:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:25.481 14:10:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:25.481 14:10:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.481 14:10:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 ************************************ 00:11:25.481 START TEST raid_state_function_test_sb 00:11:25.481 ************************************ 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:25.481 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61119 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61119' 00:11:25.482 Process raid pid: 61119 00:11:25.482 14:10:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61119 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61119 ']' 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.482 14:10:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.482 [2024-11-27 14:10:56.302012] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:25.482 [2024-11-27 14:10:56.302257] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.740 [2024-11-27 14:10:56.465651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.740 [2024-11-27 14:10:56.586352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.000 [2024-11-27 14:10:56.804287] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.000 [2024-11-27 14:10:56.804413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 [2024-11-27 14:10:57.150003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.260 [2024-11-27 14:10:57.150135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.260 [2024-11-27 14:10:57.150152] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.260 [2024-11-27 14:10:57.150163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.260 
14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.260 "name": "Existed_Raid", 00:11:26.260 "uuid": "bd6f6028-1dad-40d2-aab6-78ecfd9855ef", 00:11:26.260 "strip_size_kb": 
64, 00:11:26.260 "state": "configuring", 00:11:26.260 "raid_level": "raid0", 00:11:26.260 "superblock": true, 00:11:26.260 "num_base_bdevs": 2, 00:11:26.260 "num_base_bdevs_discovered": 0, 00:11:26.260 "num_base_bdevs_operational": 2, 00:11:26.260 "base_bdevs_list": [ 00:11:26.260 { 00:11:26.260 "name": "BaseBdev1", 00:11:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.260 "is_configured": false, 00:11:26.260 "data_offset": 0, 00:11:26.260 "data_size": 0 00:11:26.260 }, 00:11:26.260 { 00:11:26.260 "name": "BaseBdev2", 00:11:26.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.260 "is_configured": false, 00:11:26.260 "data_offset": 0, 00:11:26.260 "data_size": 0 00:11:26.260 } 00:11:26.260 ] 00:11:26.260 }' 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.260 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 [2024-11-27 14:10:57.613211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.829 [2024-11-27 14:10:57.613332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.829 14:10:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 [2024-11-27 14:10:57.625206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.829 [2024-11-27 14:10:57.625306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.829 [2024-11-27 14:10:57.625340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.829 [2024-11-27 14:10:57.625370] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 [2024-11-27 14:10:57.676915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.829 BaseBdev1 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:26.829 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.830 [ 00:11:26.830 { 00:11:26.830 "name": "BaseBdev1", 00:11:26.830 "aliases": [ 00:11:26.830 "5e32271e-73a5-4130-9b95-8ea7c4890268" 00:11:26.830 ], 00:11:26.830 "product_name": "Malloc disk", 00:11:26.830 "block_size": 512, 00:11:26.830 "num_blocks": 65536, 00:11:26.830 "uuid": "5e32271e-73a5-4130-9b95-8ea7c4890268", 00:11:26.830 "assigned_rate_limits": { 00:11:26.830 "rw_ios_per_sec": 0, 00:11:26.830 "rw_mbytes_per_sec": 0, 00:11:26.830 "r_mbytes_per_sec": 0, 00:11:26.830 "w_mbytes_per_sec": 0 00:11:26.830 }, 00:11:26.830 "claimed": true, 00:11:26.830 "claim_type": "exclusive_write", 00:11:26.830 "zoned": false, 00:11:26.830 "supported_io_types": { 00:11:26.830 "read": true, 00:11:26.830 "write": true, 00:11:26.830 "unmap": true, 00:11:26.830 "flush": true, 00:11:26.830 "reset": true, 00:11:26.830 "nvme_admin": false, 00:11:26.830 "nvme_io": false, 00:11:26.830 "nvme_io_md": false, 00:11:26.830 "write_zeroes": true, 00:11:26.830 "zcopy": true, 00:11:26.830 "get_zone_info": false, 00:11:26.830 "zone_management": false, 00:11:26.830 "zone_append": false, 00:11:26.830 "compare": false, 00:11:26.830 "compare_and_write": false, 00:11:26.830 
"abort": true, 00:11:26.830 "seek_hole": false, 00:11:26.830 "seek_data": false, 00:11:26.830 "copy": true, 00:11:26.830 "nvme_iov_md": false 00:11:26.830 }, 00:11:26.830 "memory_domains": [ 00:11:26.830 { 00:11:26.830 "dma_device_id": "system", 00:11:26.830 "dma_device_type": 1 00:11:26.830 }, 00:11:26.830 { 00:11:26.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.830 "dma_device_type": 2 00:11:26.830 } 00:11:26.830 ], 00:11:26.830 "driver_specific": {} 00:11:26.830 } 00:11:26.830 ] 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.830 "name": "Existed_Raid", 00:11:26.830 "uuid": "955941b3-1dd7-4a2b-859f-ad841ff40c3d", 00:11:26.830 "strip_size_kb": 64, 00:11:26.830 "state": "configuring", 00:11:26.830 "raid_level": "raid0", 00:11:26.830 "superblock": true, 00:11:26.830 "num_base_bdevs": 2, 00:11:26.830 "num_base_bdevs_discovered": 1, 00:11:26.830 "num_base_bdevs_operational": 2, 00:11:26.830 "base_bdevs_list": [ 00:11:26.830 { 00:11:26.830 "name": "BaseBdev1", 00:11:26.830 "uuid": "5e32271e-73a5-4130-9b95-8ea7c4890268", 00:11:26.830 "is_configured": true, 00:11:26.830 "data_offset": 2048, 00:11:26.830 "data_size": 63488 00:11:26.830 }, 00:11:26.830 { 00:11:26.830 "name": "BaseBdev2", 00:11:26.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.830 "is_configured": false, 00:11:26.830 "data_offset": 0, 00:11:26.830 "data_size": 0 00:11:26.830 } 00:11:26.830 ] 00:11:26.830 }' 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.830 14:10:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:27.397 [2024-11-27 14:10:58.140256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.397 [2024-11-27 14:10:58.140313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.397 [2024-11-27 14:10:58.152310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.397 [2024-11-27 14:10:58.154286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.397 [2024-11-27 14:10:58.154360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.397 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.397 "name": "Existed_Raid", 00:11:27.398 "uuid": "bc01a746-bbe7-4fd5-beb8-3b40ba55b395", 00:11:27.398 "strip_size_kb": 64, 00:11:27.398 "state": "configuring", 00:11:27.398 "raid_level": "raid0", 00:11:27.398 "superblock": true, 00:11:27.398 "num_base_bdevs": 2, 00:11:27.398 "num_base_bdevs_discovered": 1, 00:11:27.398 "num_base_bdevs_operational": 2, 00:11:27.398 "base_bdevs_list": [ 00:11:27.398 { 00:11:27.398 "name": "BaseBdev1", 00:11:27.398 "uuid": "5e32271e-73a5-4130-9b95-8ea7c4890268", 00:11:27.398 "is_configured": true, 00:11:27.398 "data_offset": 2048, 
00:11:27.398 "data_size": 63488 00:11:27.398 }, 00:11:27.398 { 00:11:27.398 "name": "BaseBdev2", 00:11:27.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.398 "is_configured": false, 00:11:27.398 "data_offset": 0, 00:11:27.398 "data_size": 0 00:11:27.398 } 00:11:27.398 ] 00:11:27.398 }' 00:11:27.398 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.398 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.657 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.657 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.657 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 [2024-11-27 14:10:58.631870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.917 [2024-11-27 14:10:58.632201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.917 [2024-11-27 14:10:58.632222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:27.917 [2024-11-27 14:10:58.632529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:27.917 BaseBdev2 00:11:27.917 [2024-11-27 14:10:58.632720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.917 [2024-11-27 14:10:58.632736] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:27.917 [2024-11-27 14:10:58.632911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 [ 00:11:27.917 { 00:11:27.917 "name": "BaseBdev2", 00:11:27.917 "aliases": [ 00:11:27.917 "2f146c7d-f790-419d-b782-4dcbf2c6b90e" 00:11:27.917 ], 00:11:27.917 "product_name": "Malloc disk", 00:11:27.917 "block_size": 512, 00:11:27.917 "num_blocks": 65536, 00:11:27.917 "uuid": "2f146c7d-f790-419d-b782-4dcbf2c6b90e", 00:11:27.917 "assigned_rate_limits": { 00:11:27.917 "rw_ios_per_sec": 0, 00:11:27.917 "rw_mbytes_per_sec": 0, 00:11:27.917 "r_mbytes_per_sec": 0, 00:11:27.917 "w_mbytes_per_sec": 0 00:11:27.917 }, 00:11:27.917 "claimed": true, 00:11:27.917 "claim_type": 
"exclusive_write", 00:11:27.917 "zoned": false, 00:11:27.917 "supported_io_types": { 00:11:27.917 "read": true, 00:11:27.917 "write": true, 00:11:27.917 "unmap": true, 00:11:27.917 "flush": true, 00:11:27.917 "reset": true, 00:11:27.917 "nvme_admin": false, 00:11:27.917 "nvme_io": false, 00:11:27.917 "nvme_io_md": false, 00:11:27.917 "write_zeroes": true, 00:11:27.917 "zcopy": true, 00:11:27.917 "get_zone_info": false, 00:11:27.917 "zone_management": false, 00:11:27.917 "zone_append": false, 00:11:27.917 "compare": false, 00:11:27.917 "compare_and_write": false, 00:11:27.917 "abort": true, 00:11:27.917 "seek_hole": false, 00:11:27.917 "seek_data": false, 00:11:27.917 "copy": true, 00:11:27.917 "nvme_iov_md": false 00:11:27.917 }, 00:11:27.917 "memory_domains": [ 00:11:27.917 { 00:11:27.917 "dma_device_id": "system", 00:11:27.917 "dma_device_type": 1 00:11:27.917 }, 00:11:27.917 { 00:11:27.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.917 "dma_device_type": 2 00:11:27.917 } 00:11:27.917 ], 00:11:27.917 "driver_specific": {} 00:11:27.917 } 00:11:27.917 ] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.917 "name": "Existed_Raid", 00:11:27.917 "uuid": "bc01a746-bbe7-4fd5-beb8-3b40ba55b395", 00:11:27.917 "strip_size_kb": 64, 00:11:27.917 "state": "online", 00:11:27.917 "raid_level": "raid0", 00:11:27.917 "superblock": true, 00:11:27.917 "num_base_bdevs": 2, 00:11:27.917 "num_base_bdevs_discovered": 2, 00:11:27.917 "num_base_bdevs_operational": 2, 00:11:27.917 "base_bdevs_list": [ 00:11:27.917 { 00:11:27.917 "name": "BaseBdev1", 00:11:27.917 "uuid": "5e32271e-73a5-4130-9b95-8ea7c4890268", 00:11:27.917 "is_configured": true, 00:11:27.917 "data_offset": 2048, 00:11:27.917 "data_size": 63488 
00:11:27.917 }, 00:11:27.917 { 00:11:27.917 "name": "BaseBdev2", 00:11:27.917 "uuid": "2f146c7d-f790-419d-b782-4dcbf2c6b90e", 00:11:27.917 "is_configured": true, 00:11:27.917 "data_offset": 2048, 00:11:27.917 "data_size": 63488 00:11:27.917 } 00:11:27.917 ] 00:11:27.917 }' 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.917 14:10:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.177 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.177 [2024-11-27 14:10:59.115391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.435 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.435 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.435 "name": 
"Existed_Raid", 00:11:28.435 "aliases": [ 00:11:28.435 "bc01a746-bbe7-4fd5-beb8-3b40ba55b395" 00:11:28.435 ], 00:11:28.435 "product_name": "Raid Volume", 00:11:28.435 "block_size": 512, 00:11:28.435 "num_blocks": 126976, 00:11:28.435 "uuid": "bc01a746-bbe7-4fd5-beb8-3b40ba55b395", 00:11:28.435 "assigned_rate_limits": { 00:11:28.435 "rw_ios_per_sec": 0, 00:11:28.435 "rw_mbytes_per_sec": 0, 00:11:28.435 "r_mbytes_per_sec": 0, 00:11:28.435 "w_mbytes_per_sec": 0 00:11:28.435 }, 00:11:28.435 "claimed": false, 00:11:28.436 "zoned": false, 00:11:28.436 "supported_io_types": { 00:11:28.436 "read": true, 00:11:28.436 "write": true, 00:11:28.436 "unmap": true, 00:11:28.436 "flush": true, 00:11:28.436 "reset": true, 00:11:28.436 "nvme_admin": false, 00:11:28.436 "nvme_io": false, 00:11:28.436 "nvme_io_md": false, 00:11:28.436 "write_zeroes": true, 00:11:28.436 "zcopy": false, 00:11:28.436 "get_zone_info": false, 00:11:28.436 "zone_management": false, 00:11:28.436 "zone_append": false, 00:11:28.436 "compare": false, 00:11:28.436 "compare_and_write": false, 00:11:28.436 "abort": false, 00:11:28.436 "seek_hole": false, 00:11:28.436 "seek_data": false, 00:11:28.436 "copy": false, 00:11:28.436 "nvme_iov_md": false 00:11:28.436 }, 00:11:28.436 "memory_domains": [ 00:11:28.436 { 00:11:28.436 "dma_device_id": "system", 00:11:28.436 "dma_device_type": 1 00:11:28.436 }, 00:11:28.436 { 00:11:28.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.436 "dma_device_type": 2 00:11:28.436 }, 00:11:28.436 { 00:11:28.436 "dma_device_id": "system", 00:11:28.436 "dma_device_type": 1 00:11:28.436 }, 00:11:28.436 { 00:11:28.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.436 "dma_device_type": 2 00:11:28.436 } 00:11:28.436 ], 00:11:28.436 "driver_specific": { 00:11:28.436 "raid": { 00:11:28.436 "uuid": "bc01a746-bbe7-4fd5-beb8-3b40ba55b395", 00:11:28.436 "strip_size_kb": 64, 00:11:28.436 "state": "online", 00:11:28.436 "raid_level": "raid0", 00:11:28.436 "superblock": true, 00:11:28.436 
"num_base_bdevs": 2, 00:11:28.436 "num_base_bdevs_discovered": 2, 00:11:28.436 "num_base_bdevs_operational": 2, 00:11:28.436 "base_bdevs_list": [ 00:11:28.436 { 00:11:28.436 "name": "BaseBdev1", 00:11:28.436 "uuid": "5e32271e-73a5-4130-9b95-8ea7c4890268", 00:11:28.436 "is_configured": true, 00:11:28.436 "data_offset": 2048, 00:11:28.436 "data_size": 63488 00:11:28.436 }, 00:11:28.436 { 00:11:28.436 "name": "BaseBdev2", 00:11:28.436 "uuid": "2f146c7d-f790-419d-b782-4dcbf2c6b90e", 00:11:28.436 "is_configured": true, 00:11:28.436 "data_offset": 2048, 00:11:28.436 "data_size": 63488 00:11:28.436 } 00:11:28.436 ] 00:11:28.436 } 00:11:28.436 } 00:11:28.436 }' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:28.436 BaseBdev2' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.436 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.436 [2024-11-27 14:10:59.346769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.436 [2024-11-27 14:10:59.346854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.436 [2024-11-27 14:10:59.346917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.694 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.695 14:10:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.695 "name": "Existed_Raid", 00:11:28.695 "uuid": "bc01a746-bbe7-4fd5-beb8-3b40ba55b395", 00:11:28.695 "strip_size_kb": 64, 00:11:28.695 "state": "offline", 00:11:28.695 "raid_level": "raid0", 00:11:28.695 "superblock": true, 00:11:28.695 "num_base_bdevs": 2, 00:11:28.695 "num_base_bdevs_discovered": 1, 00:11:28.695 "num_base_bdevs_operational": 1, 00:11:28.695 "base_bdevs_list": [ 00:11:28.695 { 00:11:28.695 "name": null, 00:11:28.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.695 "is_configured": false, 00:11:28.695 "data_offset": 0, 00:11:28.695 "data_size": 63488 00:11:28.695 }, 00:11:28.695 { 00:11:28.695 "name": "BaseBdev2", 00:11:28.695 "uuid": "2f146c7d-f790-419d-b782-4dcbf2c6b90e", 00:11:28.695 "is_configured": true, 00:11:28.695 "data_offset": 2048, 00:11:28.695 "data_size": 63488 00:11:28.695 } 00:11:28.695 ] 00:11:28.695 }' 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.695 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.262 14:10:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.262 14:10:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.262 [2024-11-27 14:10:59.996162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.262 [2024-11-27 14:10:59.996231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.262 14:11:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61119 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61119 ']' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61119 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61119 00:11:29.262 killing process with pid 61119 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61119' 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61119 00:11:29.262 [2024-11-27 14:11:00.186989] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.262 14:11:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61119 00:11:29.262 [2024-11-27 14:11:00.204243] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.639 14:11:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:11:30.639 00:11:30.639 real 0m5.202s 00:11:30.639 user 0m7.489s 00:11:30.639 sys 0m0.803s 00:11:30.639 ************************************ 00:11:30.639 END TEST raid_state_function_test_sb 00:11:30.639 ************************************ 00:11:30.639 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.639 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 14:11:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:30.639 14:11:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:30.639 14:11:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.639 14:11:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 ************************************ 00:11:30.639 START TEST raid_superblock_test 00:11:30.639 ************************************ 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:30.639 14:11:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61371 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61371 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61371 ']' 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.639 14:11:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.639 [2024-11-27 14:11:01.553684] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:30.639 [2024-11-27 14:11:01.553885] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61371 ] 00:11:30.898 [2024-11-27 14:11:01.728985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.898 [2024-11-27 14:11:01.849257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.157 [2024-11-27 14:11:02.057800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.157 [2024-11-27 14:11:02.057848] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.774 14:11:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 malloc1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 [2024-11-27 14:11:02.519210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:31.774 [2024-11-27 14:11:02.519272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.774 [2024-11-27 14:11:02.519295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:31.774 [2024-11-27 14:11:02.519304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.774 [2024-11-27 14:11:02.521538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.774 [2024-11-27 14:11:02.521617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:31.774 pt1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.774 14:11:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 malloc2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 [2024-11-27 14:11:02.575353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:31.774 [2024-11-27 14:11:02.575416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.774 [2024-11-27 14:11:02.575443] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:31.774 
[2024-11-27 14:11:02.575452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.774 [2024-11-27 14:11:02.577728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.774 [2024-11-27 14:11:02.577816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:31.774 pt2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 [2024-11-27 14:11:02.587390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:31.774 [2024-11-27 14:11:02.589337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:31.774 [2024-11-27 14:11:02.589523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:31.774 [2024-11-27 14:11:02.589538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:31.774 [2024-11-27 14:11:02.589819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:31.774 [2024-11-27 14:11:02.589965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:31.774 [2024-11-27 14:11:02.589976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:31.774 [2024-11-27 14:11:02.590167] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.774 "name": "raid_bdev1", 00:11:31.774 "uuid": 
"7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:31.774 "strip_size_kb": 64, 00:11:31.774 "state": "online", 00:11:31.774 "raid_level": "raid0", 00:11:31.774 "superblock": true, 00:11:31.774 "num_base_bdevs": 2, 00:11:31.774 "num_base_bdevs_discovered": 2, 00:11:31.774 "num_base_bdevs_operational": 2, 00:11:31.774 "base_bdevs_list": [ 00:11:31.774 { 00:11:31.774 "name": "pt1", 00:11:31.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:31.774 "is_configured": true, 00:11:31.774 "data_offset": 2048, 00:11:31.774 "data_size": 63488 00:11:31.774 }, 00:11:31.774 { 00:11:31.774 "name": "pt2", 00:11:31.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:31.774 "is_configured": true, 00:11:31.774 "data_offset": 2048, 00:11:31.774 "data_size": 63488 00:11:31.774 } 00:11:31.774 ] 00:11:31.774 }' 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.774 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.341 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.342 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.342 14:11:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.342 14:11:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 [2024-11-27 14:11:03.003000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.342 "name": "raid_bdev1", 00:11:32.342 "aliases": [ 00:11:32.342 "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60" 00:11:32.342 ], 00:11:32.342 "product_name": "Raid Volume", 00:11:32.342 "block_size": 512, 00:11:32.342 "num_blocks": 126976, 00:11:32.342 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:32.342 "assigned_rate_limits": { 00:11:32.342 "rw_ios_per_sec": 0, 00:11:32.342 "rw_mbytes_per_sec": 0, 00:11:32.342 "r_mbytes_per_sec": 0, 00:11:32.342 "w_mbytes_per_sec": 0 00:11:32.342 }, 00:11:32.342 "claimed": false, 00:11:32.342 "zoned": false, 00:11:32.342 "supported_io_types": { 00:11:32.342 "read": true, 00:11:32.342 "write": true, 00:11:32.342 "unmap": true, 00:11:32.342 "flush": true, 00:11:32.342 "reset": true, 00:11:32.342 "nvme_admin": false, 00:11:32.342 "nvme_io": false, 00:11:32.342 "nvme_io_md": false, 00:11:32.342 "write_zeroes": true, 00:11:32.342 "zcopy": false, 00:11:32.342 "get_zone_info": false, 00:11:32.342 "zone_management": false, 00:11:32.342 "zone_append": false, 00:11:32.342 "compare": false, 00:11:32.342 "compare_and_write": false, 00:11:32.342 "abort": false, 00:11:32.342 "seek_hole": false, 00:11:32.342 "seek_data": false, 00:11:32.342 "copy": false, 00:11:32.342 "nvme_iov_md": false 00:11:32.342 }, 00:11:32.342 "memory_domains": [ 00:11:32.342 { 00:11:32.342 "dma_device_id": "system", 00:11:32.342 "dma_device_type": 1 00:11:32.342 }, 00:11:32.342 { 00:11:32.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.342 "dma_device_type": 2 00:11:32.342 }, 00:11:32.342 { 00:11:32.342 "dma_device_id": "system", 00:11:32.342 "dma_device_type": 
1 00:11:32.342 }, 00:11:32.342 { 00:11:32.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.342 "dma_device_type": 2 00:11:32.342 } 00:11:32.342 ], 00:11:32.342 "driver_specific": { 00:11:32.342 "raid": { 00:11:32.342 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:32.342 "strip_size_kb": 64, 00:11:32.342 "state": "online", 00:11:32.342 "raid_level": "raid0", 00:11:32.342 "superblock": true, 00:11:32.342 "num_base_bdevs": 2, 00:11:32.342 "num_base_bdevs_discovered": 2, 00:11:32.342 "num_base_bdevs_operational": 2, 00:11:32.342 "base_bdevs_list": [ 00:11:32.342 { 00:11:32.342 "name": "pt1", 00:11:32.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.342 "is_configured": true, 00:11:32.342 "data_offset": 2048, 00:11:32.342 "data_size": 63488 00:11:32.342 }, 00:11:32.342 { 00:11:32.342 "name": "pt2", 00:11:32.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.342 "is_configured": true, 00:11:32.342 "data_offset": 2048, 00:11:32.342 "data_size": 63488 00:11:32.342 } 00:11:32.342 ] 00:11:32.342 } 00:11:32.342 } 00:11:32.342 }' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:32.342 pt2' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 [2024-11-27 14:11:03.210637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60 ']' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 [2024-11-27 14:11:03.258243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.342 [2024-11-27 14:11:03.258274] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.342 [2024-11-27 14:11:03.258376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.342 [2024-11-27 14:11:03.258430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.342 [2024-11-27 14:11:03.258444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.342 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.600 [2024-11-27 14:11:03.390051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:32.600 [2024-11-27 14:11:03.392068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:32.600 [2024-11-27 14:11:03.392189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:32.600 [2024-11-27 14:11:03.392262] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:32.600 [2024-11-27 14:11:03.392281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.600 [2024-11-27 14:11:03.392295] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:32.600 request: 00:11:32.600 { 00:11:32.600 "name": "raid_bdev1", 00:11:32.600 "raid_level": "raid0", 00:11:32.600 "base_bdevs": [ 00:11:32.600 "malloc1", 00:11:32.600 "malloc2" 00:11:32.600 ], 00:11:32.600 "strip_size_kb": 64, 00:11:32.600 "superblock": false, 00:11:32.600 "method": "bdev_raid_create", 00:11:32.600 "req_id": 1 00:11:32.600 } 00:11:32.600 Got JSON-RPC error response 00:11:32.600 response: 00:11:32.600 { 00:11:32.600 "code": -17, 00:11:32.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:32.600 } 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:32.600 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.601 [2024-11-27 14:11:03.441949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.601 [2024-11-27 14:11:03.442073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.601 [2024-11-27 14:11:03.442148] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:32.601 [2024-11-27 14:11:03.442201] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.601 [2024-11-27 14:11:03.444754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.601 [2024-11-27 14:11:03.444855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.601 [2024-11-27 14:11:03.444996] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:32.601 [2024-11-27 14:11:03.445132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.601 pt1 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.601 "name": "raid_bdev1", 00:11:32.601 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:32.601 "strip_size_kb": 64, 00:11:32.601 "state": "configuring", 00:11:32.601 "raid_level": "raid0", 00:11:32.601 "superblock": true, 00:11:32.601 "num_base_bdevs": 2, 00:11:32.601 "num_base_bdevs_discovered": 1, 00:11:32.601 "num_base_bdevs_operational": 2, 00:11:32.601 "base_bdevs_list": [ 00:11:32.601 { 00:11:32.601 "name": "pt1", 00:11:32.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.601 "is_configured": true, 00:11:32.601 "data_offset": 2048, 00:11:32.601 "data_size": 63488 00:11:32.601 }, 00:11:32.601 { 00:11:32.601 "name": null, 00:11:32.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.601 "is_configured": false, 00:11:32.601 "data_offset": 2048, 00:11:32.601 "data_size": 63488 00:11:32.601 } 00:11:32.601 ] 00:11:32.601 }' 00:11:32.601 14:11:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.601 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 [2024-11-27 14:11:03.917198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.169 [2024-11-27 14:11:03.917292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.169 [2024-11-27 14:11:03.917318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:33.169 [2024-11-27 14:11:03.917331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.169 [2024-11-27 14:11:03.917840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.169 [2024-11-27 14:11:03.917860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.169 [2024-11-27 14:11:03.917947] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:33.169 [2024-11-27 14:11:03.917976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.169 [2024-11-27 14:11:03.918113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:33.169 [2024-11-27 14:11:03.918124] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:33.169 [2024-11-27 14:11:03.918416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:33.169 [2024-11-27 14:11:03.918595] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:33.169 [2024-11-27 14:11:03.918611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:33.169 [2024-11-27 14:11:03.918783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.169 pt2 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.169 "name": "raid_bdev1", 00:11:33.169 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:33.169 "strip_size_kb": 64, 00:11:33.169 "state": "online", 00:11:33.169 "raid_level": "raid0", 00:11:33.169 "superblock": true, 00:11:33.169 "num_base_bdevs": 2, 00:11:33.169 "num_base_bdevs_discovered": 2, 00:11:33.169 "num_base_bdevs_operational": 2, 00:11:33.169 "base_bdevs_list": [ 00:11:33.169 { 00:11:33.169 "name": "pt1", 00:11:33.169 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.169 "is_configured": true, 00:11:33.169 "data_offset": 2048, 00:11:33.169 "data_size": 63488 00:11:33.169 }, 00:11:33.169 { 00:11:33.169 "name": "pt2", 00:11:33.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.169 "is_configured": true, 00:11:33.169 "data_offset": 2048, 00:11:33.169 "data_size": 63488 00:11:33.169 } 00:11:33.169 ] 00:11:33.169 }' 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.169 14:11:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:33.737 
14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.737 [2024-11-27 14:11:04.404612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.737 "name": "raid_bdev1", 00:11:33.737 "aliases": [ 00:11:33.737 "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60" 00:11:33.737 ], 00:11:33.737 "product_name": "Raid Volume", 00:11:33.737 "block_size": 512, 00:11:33.737 "num_blocks": 126976, 00:11:33.737 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:33.737 "assigned_rate_limits": { 00:11:33.737 "rw_ios_per_sec": 0, 00:11:33.737 "rw_mbytes_per_sec": 0, 00:11:33.737 "r_mbytes_per_sec": 0, 00:11:33.737 "w_mbytes_per_sec": 0 00:11:33.737 }, 00:11:33.737 "claimed": false, 00:11:33.737 "zoned": false, 00:11:33.737 "supported_io_types": { 00:11:33.737 "read": true, 00:11:33.737 "write": true, 00:11:33.737 "unmap": true, 00:11:33.737 "flush": true, 00:11:33.737 "reset": true, 00:11:33.737 "nvme_admin": false, 00:11:33.737 "nvme_io": false, 00:11:33.737 "nvme_io_md": false, 00:11:33.737 
"write_zeroes": true, 00:11:33.737 "zcopy": false, 00:11:33.737 "get_zone_info": false, 00:11:33.737 "zone_management": false, 00:11:33.737 "zone_append": false, 00:11:33.737 "compare": false, 00:11:33.737 "compare_and_write": false, 00:11:33.737 "abort": false, 00:11:33.737 "seek_hole": false, 00:11:33.737 "seek_data": false, 00:11:33.737 "copy": false, 00:11:33.737 "nvme_iov_md": false 00:11:33.737 }, 00:11:33.737 "memory_domains": [ 00:11:33.737 { 00:11:33.737 "dma_device_id": "system", 00:11:33.737 "dma_device_type": 1 00:11:33.737 }, 00:11:33.737 { 00:11:33.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.737 "dma_device_type": 2 00:11:33.737 }, 00:11:33.737 { 00:11:33.737 "dma_device_id": "system", 00:11:33.737 "dma_device_type": 1 00:11:33.737 }, 00:11:33.737 { 00:11:33.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.737 "dma_device_type": 2 00:11:33.737 } 00:11:33.737 ], 00:11:33.737 "driver_specific": { 00:11:33.737 "raid": { 00:11:33.737 "uuid": "7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60", 00:11:33.737 "strip_size_kb": 64, 00:11:33.737 "state": "online", 00:11:33.737 "raid_level": "raid0", 00:11:33.737 "superblock": true, 00:11:33.737 "num_base_bdevs": 2, 00:11:33.737 "num_base_bdevs_discovered": 2, 00:11:33.737 "num_base_bdevs_operational": 2, 00:11:33.737 "base_bdevs_list": [ 00:11:33.737 { 00:11:33.737 "name": "pt1", 00:11:33.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.737 "is_configured": true, 00:11:33.737 "data_offset": 2048, 00:11:33.737 "data_size": 63488 00:11:33.737 }, 00:11:33.737 { 00:11:33.737 "name": "pt2", 00:11:33.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.737 "is_configured": true, 00:11:33.737 "data_offset": 2048, 00:11:33.737 "data_size": 63488 00:11:33.737 } 00:11:33.737 ] 00:11:33.737 } 00:11:33.737 } 00:11:33.737 }' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:33.737 pt2' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.737 14:11:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.737 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:33.737 [2024-11-27 14:11:04.656270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.738 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60 '!=' 7c17b56b-ff3d-4a0e-9b2a-83477aa6ff60 ']' 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61371 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61371 ']' 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61371 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61371 00:11:33.997 killing process with pid 61371 
00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61371' 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61371 00:11:33.997 [2024-11-27 14:11:04.737828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.997 14:11:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61371 00:11:33.997 [2024-11-27 14:11:04.737926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.997 [2024-11-27 14:11:04.737977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.997 [2024-11-27 14:11:04.737989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:34.255 [2024-11-27 14:11:04.953656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.193 14:11:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:35.193 00:11:35.193 real 0m4.656s 00:11:35.193 user 0m6.534s 00:11:35.193 sys 0m0.771s 00:11:35.193 14:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.193 14:11:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.193 ************************************ 00:11:35.193 END TEST raid_superblock_test 00:11:35.193 ************************************ 00:11:35.452 14:11:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:35.453 14:11:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.453 14:11:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.453 14:11:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.453 ************************************ 00:11:35.453 START TEST raid_read_error_test 00:11:35.453 ************************************ 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.453 14:11:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aO63rqNpgL 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61583 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61583 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61583 ']' 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.453 14:11:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.453 [2024-11-27 14:11:06.301979] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:35.453 [2024-11-27 14:11:06.302591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:11:35.713 [2024-11-27 14:11:06.459602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.713 [2024-11-27 14:11:06.585282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.972 [2024-11-27 14:11:06.802737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.972 [2024-11-27 14:11:06.802891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 BaseBdev1_malloc 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 true 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 [2024-11-27 14:11:07.254760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.540 [2024-11-27 14:11:07.254821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.540 [2024-11-27 14:11:07.254843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.540 [2024-11-27 14:11:07.254854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.540 [2024-11-27 14:11:07.257230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.540 [2024-11-27 14:11:07.257341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.540 BaseBdev1 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.540 BaseBdev2_malloc 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 true 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 [2024-11-27 14:11:07.323527] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.540 [2024-11-27 14:11:07.323595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.540 [2024-11-27 14:11:07.323614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.540 [2024-11-27 14:11:07.323625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.540 [2024-11-27 14:11:07.326012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.540 [2024-11-27 14:11:07.326057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.540 BaseBdev2 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:36.540 14:11:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.540 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.540 [2024-11-27 14:11:07.335589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.540 [2024-11-27 14:11:07.337753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.541 [2024-11-27 14:11:07.338008] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:36.541 [2024-11-27 14:11:07.338027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:36.541 [2024-11-27 14:11:07.338348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:36.541 [2024-11-27 14:11:07.338549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:36.541 [2024-11-27 14:11:07.338566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:36.541 [2024-11-27 14:11:07.338773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.541 "name": "raid_bdev1", 00:11:36.541 "uuid": "d4d6a356-6959-4a9c-b663-52de2e2703c3", 00:11:36.541 "strip_size_kb": 64, 00:11:36.541 "state": "online", 00:11:36.541 "raid_level": "raid0", 00:11:36.541 "superblock": true, 00:11:36.541 "num_base_bdevs": 2, 00:11:36.541 "num_base_bdevs_discovered": 2, 00:11:36.541 "num_base_bdevs_operational": 2, 00:11:36.541 "base_bdevs_list": [ 00:11:36.541 { 00:11:36.541 "name": "BaseBdev1", 00:11:36.541 "uuid": "027b3a5a-83f3-5254-a48a-38af4b7fce65", 00:11:36.541 "is_configured": true, 00:11:36.541 "data_offset": 2048, 00:11:36.541 "data_size": 63488 00:11:36.541 }, 00:11:36.541 { 00:11:36.541 "name": "BaseBdev2", 00:11:36.541 "uuid": "e5d29014-7726-52a4-b6e6-3a7209b4a178", 00:11:36.541 "is_configured": true, 00:11:36.541 "data_offset": 2048, 00:11:36.541 "data_size": 63488 00:11:36.541 } 00:11:36.541 ] 00:11:36.541 }' 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.541 14:11:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.108 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.108 14:11:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.108 [2024-11-27 14:11:07.932045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.044 "name": "raid_bdev1", 00:11:38.044 "uuid": "d4d6a356-6959-4a9c-b663-52de2e2703c3", 00:11:38.044 "strip_size_kb": 64, 00:11:38.044 "state": "online", 00:11:38.044 "raid_level": "raid0", 00:11:38.044 "superblock": true, 00:11:38.044 "num_base_bdevs": 2, 00:11:38.044 "num_base_bdevs_discovered": 2, 00:11:38.044 "num_base_bdevs_operational": 2, 00:11:38.044 "base_bdevs_list": [ 00:11:38.044 { 00:11:38.044 "name": "BaseBdev1", 00:11:38.044 "uuid": "027b3a5a-83f3-5254-a48a-38af4b7fce65", 00:11:38.044 "is_configured": true, 00:11:38.044 "data_offset": 2048, 00:11:38.044 "data_size": 63488 00:11:38.044 }, 00:11:38.044 { 00:11:38.044 "name": "BaseBdev2", 00:11:38.044 "uuid": "e5d29014-7726-52a4-b6e6-3a7209b4a178", 00:11:38.044 "is_configured": true, 00:11:38.044 "data_offset": 2048, 00:11:38.044 "data_size": 63488 00:11:38.044 } 00:11:38.044 ] 00:11:38.044 }' 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.044 14:11:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.611 14:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.611 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.611 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.611 [2024-11-27 14:11:09.341449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.611 [2024-11-27 14:11:09.341586] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.612 [2024-11-27 14:11:09.344823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.612 [2024-11-27 14:11:09.344953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.612 [2024-11-27 14:11:09.345037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.612 [2024-11-27 14:11:09.345144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:38.612 { 00:11:38.612 "results": [ 00:11:38.612 { 00:11:38.612 "job": "raid_bdev1", 00:11:38.612 "core_mask": "0x1", 00:11:38.612 "workload": "randrw", 00:11:38.612 "percentage": 50, 00:11:38.612 "status": "finished", 00:11:38.612 "queue_depth": 1, 00:11:38.612 "io_size": 131072, 00:11:38.612 "runtime": 1.410229, 00:11:38.612 "iops": 14225.34921633295, 00:11:38.612 "mibps": 1778.1686520416188, 00:11:38.612 "io_failed": 1, 00:11:38.612 "io_timeout": 0, 00:11:38.612 "avg_latency_us": 97.06405096166948, 00:11:38.612 "min_latency_us": 28.50655021834061, 00:11:38.612 "max_latency_us": 1702.7912663755458 00:11:38.612 } 00:11:38.612 ], 00:11:38.612 "core_count": 1 00:11:38.612 } 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61583 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61583 ']' 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61583 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61583 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61583' 00:11:38.612 killing process with pid 61583 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61583 00:11:38.612 [2024-11-27 14:11:09.394696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.612 14:11:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61583 00:11:38.612 [2024-11-27 14:11:09.537171] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aO63rqNpgL 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:39.990 00:11:39.990 real 0m4.629s 00:11:39.990 user 0m5.654s 00:11:39.990 sys 0m0.541s 00:11:39.990 ************************************ 00:11:39.990 END TEST raid_read_error_test 00:11:39.990 ************************************ 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.990 14:11:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.990 14:11:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:11:39.990 14:11:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:39.990 14:11:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.990 14:11:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.990 ************************************ 00:11:39.990 START TEST raid_write_error_test 00:11:39.990 ************************************ 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.990 14:11:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2OYRSnxeoK 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61729 00:11:39.990 14:11:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61729 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61729 ']' 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.990 14:11:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.249 [2024-11-27 14:11:10.997519] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:40.249 [2024-11-27 14:11:10.997729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ] 00:11:40.249 [2024-11-27 14:11:11.177827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.507 [2024-11-27 14:11:11.305940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.766 [2024-11-27 14:11:11.513856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.767 [2024-11-27 14:11:11.514016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 BaseBdev1_malloc 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 true 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 [2024-11-27 14:11:11.934586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:41.026 [2024-11-27 14:11:11.934644] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.026 [2024-11-27 14:11:11.934668] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:41.026 [2024-11-27 14:11:11.934679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.026 [2024-11-27 14:11:11.937049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.026 BaseBdev1 00:11:41.026 [2024-11-27 14:11:11.937185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.026 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 BaseBdev2_malloc 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:41.286 14:11:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 true 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:41.286 14:11:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 [2024-11-27 14:11:12.006162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:41.286 [2024-11-27 14:11:12.006266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.286 [2024-11-27 14:11:12.006327] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:41.286 [2024-11-27 14:11:12.006369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.286 [2024-11-27 14:11:12.008865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.286 [2024-11-27 14:11:12.008955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.286 BaseBdev2 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 [2024-11-27 14:11:12.018207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:41.286 [2024-11-27 14:11:12.020339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.286 [2024-11-27 14:11:12.020617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.286 [2024-11-27 14:11:12.020679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:41.286 [2024-11-27 14:11:12.021000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:41.286 [2024-11-27 14:11:12.021266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.286 [2024-11-27 14:11:12.021321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:41.286 [2024-11-27 14:11:12.021598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.286 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.286 "name": "raid_bdev1", 00:11:41.286 "uuid": "2b3d1dc6-d2f8-46ef-a793-0e0556286413", 00:11:41.286 "strip_size_kb": 64, 00:11:41.286 "state": "online", 00:11:41.286 "raid_level": "raid0", 00:11:41.286 "superblock": true, 00:11:41.286 "num_base_bdevs": 2, 00:11:41.286 "num_base_bdevs_discovered": 2, 00:11:41.286 "num_base_bdevs_operational": 2, 00:11:41.286 "base_bdevs_list": [ 00:11:41.286 { 00:11:41.286 "name": "BaseBdev1", 00:11:41.286 "uuid": "87bdf1ac-017c-5ce6-a6ae-29966cfa510e", 00:11:41.287 "is_configured": true, 00:11:41.287 "data_offset": 2048, 00:11:41.287 "data_size": 63488 00:11:41.287 }, 00:11:41.287 { 00:11:41.287 "name": "BaseBdev2", 00:11:41.287 "uuid": "41eb136d-a5a2-5fd5-83a6-0542a221511b", 00:11:41.287 "is_configured": true, 00:11:41.287 "data_offset": 2048, 00:11:41.287 "data_size": 63488 00:11:41.287 } 00:11:41.287 ] 00:11:41.287 }' 00:11:41.287 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.287 14:11:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.547 14:11:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:41.547 14:11:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.806 [2024-11-27 14:11:12.566547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.743 14:11:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.743 "name": "raid_bdev1", 00:11:42.743 "uuid": "2b3d1dc6-d2f8-46ef-a793-0e0556286413", 00:11:42.743 "strip_size_kb": 64, 00:11:42.743 "state": "online", 00:11:42.743 "raid_level": "raid0", 00:11:42.743 "superblock": true, 00:11:42.743 "num_base_bdevs": 2, 00:11:42.743 "num_base_bdevs_discovered": 2, 00:11:42.743 "num_base_bdevs_operational": 2, 00:11:42.743 "base_bdevs_list": [ 00:11:42.743 { 00:11:42.743 "name": "BaseBdev1", 00:11:42.743 "uuid": "87bdf1ac-017c-5ce6-a6ae-29966cfa510e", 00:11:42.743 "is_configured": true, 00:11:42.743 "data_offset": 2048, 00:11:42.743 "data_size": 63488 00:11:42.743 }, 00:11:42.743 { 00:11:42.743 "name": "BaseBdev2", 00:11:42.743 "uuid": "41eb136d-a5a2-5fd5-83a6-0542a221511b", 00:11:42.743 "is_configured": true, 00:11:42.743 "data_offset": 2048, 00:11:42.743 "data_size": 63488 00:11:42.743 } 00:11:42.743 ] 00:11:42.743 }' 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.743 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.324 [2024-11-27 14:11:13.987004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.324 [2024-11-27 14:11:13.987107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.324 [2024-11-27 14:11:13.990144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.324 [2024-11-27 14:11:13.990224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.324 [2024-11-27 14:11:13.990275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.324 [2024-11-27 14:11:13.990318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:43.324 { 00:11:43.324 "results": [ 00:11:43.324 { 00:11:43.324 "job": "raid_bdev1", 00:11:43.324 "core_mask": "0x1", 00:11:43.324 "workload": "randrw", 00:11:43.324 "percentage": 50, 00:11:43.324 "status": "finished", 00:11:43.324 "queue_depth": 1, 00:11:43.324 "io_size": 131072, 00:11:43.324 "runtime": 1.421482, 00:11:43.324 "iops": 14446.894156943246, 00:11:43.324 "mibps": 1805.8617696179058, 00:11:43.324 "io_failed": 1, 00:11:43.324 "io_timeout": 0, 00:11:43.324 "avg_latency_us": 95.65648962900701, 00:11:43.324 "min_latency_us": 26.829694323144103, 00:11:43.324 "max_latency_us": 1473.844541484716 00:11:43.324 } 00:11:43.324 ], 00:11:43.324 "core_count": 1 00:11:43.324 } 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61729 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61729 ']' 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61729 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:43.324 14:11:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.324 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61729 00:11:43.324 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.324 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.324 killing process with pid 61729 00:11:43.325 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61729' 00:11:43.325 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61729 00:11:43.325 [2024-11-27 14:11:14.037479] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.325 14:11:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61729 00:11:43.325 [2024-11-27 14:11:14.184436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2OYRSnxeoK 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:44.735 ************************************ 00:11:44.735 END TEST raid_write_error_test 00:11:44.735 ************************************ 00:11:44.735 
14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:44.735 00:11:44.735 real 0m4.545s 00:11:44.735 user 0m5.510s 00:11:44.735 sys 0m0.561s 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.735 14:11:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.735 14:11:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:44.735 14:11:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:44.735 14:11:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.735 14:11:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.735 14:11:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.735 ************************************ 00:11:44.735 START TEST raid_state_function_test 00:11:44.735 ************************************ 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61872 00:11:44.735 14:11:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61872' 00:11:44.735 Process raid pid: 61872 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61872 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61872 ']' 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.735 14:11:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.735 [2024-11-27 14:11:15.601508] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:11:44.735 [2024-11-27 14:11:15.601711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.995 [2024-11-27 14:11:15.777242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.995 [2024-11-27 14:11:15.897942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.255 [2024-11-27 14:11:16.116042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.255 [2024-11-27 14:11:16.116097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.825 [2024-11-27 14:11:16.487199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:45.825 [2024-11-27 14:11:16.487318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:45.825 [2024-11-27 14:11:16.487333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.825 [2024-11-27 14:11:16.487359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.825 14:11:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.825 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.826 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.826 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.826 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.826 "name": "Existed_Raid", 00:11:45.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.826 "strip_size_kb": 64, 00:11:45.826 "state": "configuring", 00:11:45.826 
"raid_level": "concat", 00:11:45.826 "superblock": false, 00:11:45.826 "num_base_bdevs": 2, 00:11:45.826 "num_base_bdevs_discovered": 0, 00:11:45.826 "num_base_bdevs_operational": 2, 00:11:45.826 "base_bdevs_list": [ 00:11:45.826 { 00:11:45.826 "name": "BaseBdev1", 00:11:45.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.826 "is_configured": false, 00:11:45.826 "data_offset": 0, 00:11:45.826 "data_size": 0 00:11:45.826 }, 00:11:45.826 { 00:11:45.826 "name": "BaseBdev2", 00:11:45.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.826 "is_configured": false, 00:11:45.826 "data_offset": 0, 00:11:45.826 "data_size": 0 00:11:45.826 } 00:11:45.826 ] 00:11:45.826 }' 00:11:45.826 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.826 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.085 [2024-11-27 14:11:16.978278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.085 [2024-11-27 14:11:16.978369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:46.085 [2024-11-27 14:11:16.990249] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.085 [2024-11-27 14:11:16.990330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.085 [2024-11-27 14:11:16.990358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.085 [2024-11-27 14:11:16.990384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.085 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.086 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.086 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.086 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.344 [2024-11-27 14:11:17.041237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.344 BaseBdev1 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.344 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.345 [ 00:11:46.345 { 00:11:46.345 "name": "BaseBdev1", 00:11:46.345 "aliases": [ 00:11:46.345 "e4ceb353-3ee3-471a-a98f-415644f21f2c" 00:11:46.345 ], 00:11:46.345 "product_name": "Malloc disk", 00:11:46.345 "block_size": 512, 00:11:46.345 "num_blocks": 65536, 00:11:46.345 "uuid": "e4ceb353-3ee3-471a-a98f-415644f21f2c", 00:11:46.345 "assigned_rate_limits": { 00:11:46.345 "rw_ios_per_sec": 0, 00:11:46.345 "rw_mbytes_per_sec": 0, 00:11:46.345 "r_mbytes_per_sec": 0, 00:11:46.345 "w_mbytes_per_sec": 0 00:11:46.345 }, 00:11:46.345 "claimed": true, 00:11:46.345 "claim_type": "exclusive_write", 00:11:46.345 "zoned": false, 00:11:46.345 "supported_io_types": { 00:11:46.345 "read": true, 00:11:46.345 "write": true, 00:11:46.345 "unmap": true, 00:11:46.345 "flush": true, 00:11:46.345 "reset": true, 00:11:46.345 "nvme_admin": false, 00:11:46.345 "nvme_io": false, 00:11:46.345 "nvme_io_md": false, 00:11:46.345 "write_zeroes": true, 00:11:46.345 "zcopy": true, 00:11:46.345 "get_zone_info": false, 00:11:46.345 "zone_management": false, 00:11:46.345 "zone_append": false, 00:11:46.345 "compare": false, 00:11:46.345 "compare_and_write": false, 00:11:46.345 "abort": true, 00:11:46.345 "seek_hole": false, 00:11:46.345 "seek_data": false, 00:11:46.345 "copy": true, 00:11:46.345 "nvme_iov_md": 
false 00:11:46.345 }, 00:11:46.345 "memory_domains": [ 00:11:46.345 { 00:11:46.345 "dma_device_id": "system", 00:11:46.345 "dma_device_type": 1 00:11:46.345 }, 00:11:46.345 { 00:11:46.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.345 "dma_device_type": 2 00:11:46.345 } 00:11:46.345 ], 00:11:46.345 "driver_specific": {} 00:11:46.345 } 00:11:46.345 ] 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.345 14:11:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.345 "name": "Existed_Raid", 00:11:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.345 "strip_size_kb": 64, 00:11:46.345 "state": "configuring", 00:11:46.345 "raid_level": "concat", 00:11:46.345 "superblock": false, 00:11:46.345 "num_base_bdevs": 2, 00:11:46.345 "num_base_bdevs_discovered": 1, 00:11:46.345 "num_base_bdevs_operational": 2, 00:11:46.345 "base_bdevs_list": [ 00:11:46.345 { 00:11:46.345 "name": "BaseBdev1", 00:11:46.345 "uuid": "e4ceb353-3ee3-471a-a98f-415644f21f2c", 00:11:46.345 "is_configured": true, 00:11:46.345 "data_offset": 0, 00:11:46.345 "data_size": 65536 00:11:46.345 }, 00:11:46.345 { 00:11:46.345 "name": "BaseBdev2", 00:11:46.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.345 "is_configured": false, 00:11:46.345 "data_offset": 0, 00:11:46.345 "data_size": 0 00:11:46.345 } 00:11:46.345 ] 00:11:46.345 }' 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.345 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.643 [2024-11-27 14:11:17.524464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:46.643 [2024-11-27 14:11:17.524578] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.643 [2024-11-27 14:11:17.532503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.643 [2024-11-27 14:11:17.534600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:46.643 [2024-11-27 14:11:17.534646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.643 "name": "Existed_Raid", 00:11:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.643 "strip_size_kb": 64, 00:11:46.643 "state": "configuring", 00:11:46.643 "raid_level": "concat", 00:11:46.643 "superblock": false, 00:11:46.643 "num_base_bdevs": 2, 00:11:46.643 "num_base_bdevs_discovered": 1, 00:11:46.643 "num_base_bdevs_operational": 2, 00:11:46.643 "base_bdevs_list": [ 00:11:46.643 { 00:11:46.643 "name": "BaseBdev1", 00:11:46.643 "uuid": "e4ceb353-3ee3-471a-a98f-415644f21f2c", 00:11:46.643 "is_configured": true, 00:11:46.643 "data_offset": 0, 00:11:46.643 "data_size": 65536 00:11:46.643 }, 00:11:46.643 { 00:11:46.643 "name": "BaseBdev2", 00:11:46.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.643 "is_configured": false, 00:11:46.643 "data_offset": 0, 00:11:46.643 "data_size": 0 
00:11:46.643 } 00:11:46.643 ] 00:11:46.643 }' 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.643 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.215 [2024-11-27 14:11:18.063841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.215 [2024-11-27 14:11:18.063888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:47.215 [2024-11-27 14:11:18.063896] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:47.215 [2024-11-27 14:11:18.064216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:47.215 [2024-11-27 14:11:18.064415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:47.215 [2024-11-27 14:11:18.064430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:47.215 [2024-11-27 14:11:18.064743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.215 BaseBdev2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.215 14:11:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.215 [ 00:11:47.215 { 00:11:47.215 "name": "BaseBdev2", 00:11:47.215 "aliases": [ 00:11:47.215 "003743ca-c125-49d6-9229-5a8b03364715" 00:11:47.215 ], 00:11:47.215 "product_name": "Malloc disk", 00:11:47.215 "block_size": 512, 00:11:47.215 "num_blocks": 65536, 00:11:47.215 "uuid": "003743ca-c125-49d6-9229-5a8b03364715", 00:11:47.215 "assigned_rate_limits": { 00:11:47.215 "rw_ios_per_sec": 0, 00:11:47.215 "rw_mbytes_per_sec": 0, 00:11:47.215 "r_mbytes_per_sec": 0, 00:11:47.215 "w_mbytes_per_sec": 0 00:11:47.215 }, 00:11:47.215 "claimed": true, 00:11:47.215 "claim_type": "exclusive_write", 00:11:47.215 "zoned": false, 00:11:47.215 "supported_io_types": { 00:11:47.215 "read": true, 00:11:47.215 "write": true, 00:11:47.215 "unmap": true, 00:11:47.215 "flush": true, 00:11:47.215 "reset": true, 00:11:47.215 "nvme_admin": false, 00:11:47.215 "nvme_io": false, 00:11:47.215 "nvme_io_md": 
false, 00:11:47.215 "write_zeroes": true, 00:11:47.215 "zcopy": true, 00:11:47.215 "get_zone_info": false, 00:11:47.215 "zone_management": false, 00:11:47.215 "zone_append": false, 00:11:47.215 "compare": false, 00:11:47.215 "compare_and_write": false, 00:11:47.215 "abort": true, 00:11:47.215 "seek_hole": false, 00:11:47.215 "seek_data": false, 00:11:47.215 "copy": true, 00:11:47.215 "nvme_iov_md": false 00:11:47.215 }, 00:11:47.215 "memory_domains": [ 00:11:47.215 { 00:11:47.215 "dma_device_id": "system", 00:11:47.215 "dma_device_type": 1 00:11:47.215 }, 00:11:47.215 { 00:11:47.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.215 "dma_device_type": 2 00:11:47.215 } 00:11:47.215 ], 00:11:47.215 "driver_specific": {} 00:11:47.215 } 00:11:47.215 ] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.215 "name": "Existed_Raid", 00:11:47.215 "uuid": "7a5dcff1-c58f-4505-b1b2-087d90cb6c92", 00:11:47.215 "strip_size_kb": 64, 00:11:47.215 "state": "online", 00:11:47.215 "raid_level": "concat", 00:11:47.215 "superblock": false, 00:11:47.215 "num_base_bdevs": 2, 00:11:47.215 "num_base_bdevs_discovered": 2, 00:11:47.215 "num_base_bdevs_operational": 2, 00:11:47.215 "base_bdevs_list": [ 00:11:47.215 { 00:11:47.215 "name": "BaseBdev1", 00:11:47.215 "uuid": "e4ceb353-3ee3-471a-a98f-415644f21f2c", 00:11:47.215 "is_configured": true, 00:11:47.215 "data_offset": 0, 00:11:47.215 "data_size": 65536 00:11:47.215 }, 00:11:47.215 { 00:11:47.215 "name": "BaseBdev2", 00:11:47.215 "uuid": "003743ca-c125-49d6-9229-5a8b03364715", 00:11:47.215 "is_configured": true, 00:11:47.215 "data_offset": 0, 00:11:47.215 "data_size": 65536 00:11:47.215 } 00:11:47.215 ] 00:11:47.215 }' 00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:47.215 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.783 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.784 [2024-11-27 14:11:18.555396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.784 "name": "Existed_Raid", 00:11:47.784 "aliases": [ 00:11:47.784 "7a5dcff1-c58f-4505-b1b2-087d90cb6c92" 00:11:47.784 ], 00:11:47.784 "product_name": "Raid Volume", 00:11:47.784 "block_size": 512, 00:11:47.784 "num_blocks": 131072, 00:11:47.784 "uuid": "7a5dcff1-c58f-4505-b1b2-087d90cb6c92", 00:11:47.784 "assigned_rate_limits": { 00:11:47.784 "rw_ios_per_sec": 0, 00:11:47.784 "rw_mbytes_per_sec": 0, 00:11:47.784 "r_mbytes_per_sec": 
0, 00:11:47.784 "w_mbytes_per_sec": 0 00:11:47.784 }, 00:11:47.784 "claimed": false, 00:11:47.784 "zoned": false, 00:11:47.784 "supported_io_types": { 00:11:47.784 "read": true, 00:11:47.784 "write": true, 00:11:47.784 "unmap": true, 00:11:47.784 "flush": true, 00:11:47.784 "reset": true, 00:11:47.784 "nvme_admin": false, 00:11:47.784 "nvme_io": false, 00:11:47.784 "nvme_io_md": false, 00:11:47.784 "write_zeroes": true, 00:11:47.784 "zcopy": false, 00:11:47.784 "get_zone_info": false, 00:11:47.784 "zone_management": false, 00:11:47.784 "zone_append": false, 00:11:47.784 "compare": false, 00:11:47.784 "compare_and_write": false, 00:11:47.784 "abort": false, 00:11:47.784 "seek_hole": false, 00:11:47.784 "seek_data": false, 00:11:47.784 "copy": false, 00:11:47.784 "nvme_iov_md": false 00:11:47.784 }, 00:11:47.784 "memory_domains": [ 00:11:47.784 { 00:11:47.784 "dma_device_id": "system", 00:11:47.784 "dma_device_type": 1 00:11:47.784 }, 00:11:47.784 { 00:11:47.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.784 "dma_device_type": 2 00:11:47.784 }, 00:11:47.784 { 00:11:47.784 "dma_device_id": "system", 00:11:47.784 "dma_device_type": 1 00:11:47.784 }, 00:11:47.784 { 00:11:47.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.784 "dma_device_type": 2 00:11:47.784 } 00:11:47.784 ], 00:11:47.784 "driver_specific": { 00:11:47.784 "raid": { 00:11:47.784 "uuid": "7a5dcff1-c58f-4505-b1b2-087d90cb6c92", 00:11:47.784 "strip_size_kb": 64, 00:11:47.784 "state": "online", 00:11:47.784 "raid_level": "concat", 00:11:47.784 "superblock": false, 00:11:47.784 "num_base_bdevs": 2, 00:11:47.784 "num_base_bdevs_discovered": 2, 00:11:47.784 "num_base_bdevs_operational": 2, 00:11:47.784 "base_bdevs_list": [ 00:11:47.784 { 00:11:47.784 "name": "BaseBdev1", 00:11:47.784 "uuid": "e4ceb353-3ee3-471a-a98f-415644f21f2c", 00:11:47.784 "is_configured": true, 00:11:47.784 "data_offset": 0, 00:11:47.784 "data_size": 65536 00:11:47.784 }, 00:11:47.784 { 00:11:47.784 "name": "BaseBdev2", 
00:11:47.784 "uuid": "003743ca-c125-49d6-9229-5a8b03364715", 00:11:47.784 "is_configured": true, 00:11:47.784 "data_offset": 0, 00:11:47.784 "data_size": 65536 00:11:47.784 } 00:11:47.784 ] 00:11:47.784 } 00:11:47.784 } 00:11:47.784 }' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.784 BaseBdev2' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.784 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.043 [2024-11-27 14:11:18.754786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.043 [2024-11-27 14:11:18.754878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.043 [2024-11-27 14:11:18.754977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.043 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.043 "name": "Existed_Raid", 00:11:48.043 "uuid": "7a5dcff1-c58f-4505-b1b2-087d90cb6c92", 00:11:48.043 "strip_size_kb": 64, 00:11:48.043 
"state": "offline", 00:11:48.043 "raid_level": "concat", 00:11:48.043 "superblock": false, 00:11:48.043 "num_base_bdevs": 2, 00:11:48.043 "num_base_bdevs_discovered": 1, 00:11:48.043 "num_base_bdevs_operational": 1, 00:11:48.043 "base_bdevs_list": [ 00:11:48.043 { 00:11:48.043 "name": null, 00:11:48.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.043 "is_configured": false, 00:11:48.043 "data_offset": 0, 00:11:48.043 "data_size": 65536 00:11:48.044 }, 00:11:48.044 { 00:11:48.044 "name": "BaseBdev2", 00:11:48.044 "uuid": "003743ca-c125-49d6-9229-5a8b03364715", 00:11:48.044 "is_configured": true, 00:11:48.044 "data_offset": 0, 00:11:48.044 "data_size": 65536 00:11:48.044 } 00:11:48.044 ] 00:11:48.044 }' 00:11:48.044 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.044 14:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 [2024-11-27 14:11:19.325459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.719 [2024-11-27 14:11:19.325520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61872 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61872 ']' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61872 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61872 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.719 killing process with pid 61872 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61872' 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61872 00:11:48.719 [2024-11-27 14:11:19.524718] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.719 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61872 00:11:48.719 [2024-11-27 14:11:19.541959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:50.102 00:11:50.102 real 0m5.210s 00:11:50.102 user 0m7.517s 00:11:50.102 sys 0m0.838s 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.102 ************************************ 00:11:50.102 END TEST raid_state_function_test 00:11:50.102 ************************************ 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.102 14:11:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:50.102 14:11:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:50.102 14:11:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.102 14:11:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.102 ************************************ 00:11:50.102 START TEST raid_state_function_test_sb 00:11:50.102 ************************************ 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62125 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62125' 00:11:50.102 Process raid pid: 62125 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62125 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62125 ']' 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.102 14:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.102 [2024-11-27 14:11:20.882414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:50.102 [2024-11-27 14:11:20.882635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.361 [2024-11-27 14:11:21.058168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.361 [2024-11-27 14:11:21.185039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.620 [2024-11-27 14:11:21.407256] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.620 [2024-11-27 14:11:21.407390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.879 [2024-11-27 14:11:21.746539] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:11:50.879 [2024-11-27 14:11:21.746660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:50.879 [2024-11-27 14:11:21.746703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.879 [2024-11-27 14:11:21.746729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.879 "name": "Existed_Raid", 00:11:50.879 "uuid": "be9b01b2-53e9-4653-bd97-60f479c61d95", 00:11:50.879 "strip_size_kb": 64, 00:11:50.879 "state": "configuring", 00:11:50.879 "raid_level": "concat", 00:11:50.879 "superblock": true, 00:11:50.879 "num_base_bdevs": 2, 00:11:50.879 "num_base_bdevs_discovered": 0, 00:11:50.879 "num_base_bdevs_operational": 2, 00:11:50.879 "base_bdevs_list": [ 00:11:50.879 { 00:11:50.879 "name": "BaseBdev1", 00:11:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.879 "is_configured": false, 00:11:50.879 "data_offset": 0, 00:11:50.879 "data_size": 0 00:11:50.879 }, 00:11:50.879 { 00:11:50.879 "name": "BaseBdev2", 00:11:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.879 "is_configured": false, 00:11:50.879 "data_offset": 0, 00:11:50.879 "data_size": 0 00:11:50.879 } 00:11:50.879 ] 00:11:50.879 }' 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.879 14:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 [2024-11-27 14:11:22.209693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:11:51.446 [2024-11-27 14:11:22.209731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 [2024-11-27 14:11:22.221656] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:51.446 [2024-11-27 14:11:22.221708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:51.446 [2024-11-27 14:11:22.221718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.446 [2024-11-27 14:11:22.221729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 [2024-11-27 14:11:22.273562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.446 BaseBdev1 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.446 [ 00:11:51.446 { 00:11:51.446 "name": "BaseBdev1", 00:11:51.446 "aliases": [ 00:11:51.446 "44108527-8ad7-4946-adab-e536dfd98bf0" 00:11:51.446 ], 00:11:51.446 "product_name": "Malloc disk", 00:11:51.446 "block_size": 512, 00:11:51.446 "num_blocks": 65536, 00:11:51.446 "uuid": "44108527-8ad7-4946-adab-e536dfd98bf0", 00:11:51.446 "assigned_rate_limits": { 00:11:51.446 "rw_ios_per_sec": 0, 00:11:51.446 "rw_mbytes_per_sec": 0, 00:11:51.446 "r_mbytes_per_sec": 0, 00:11:51.446 "w_mbytes_per_sec": 0 00:11:51.446 }, 00:11:51.446 "claimed": true, 
00:11:51.446 "claim_type": "exclusive_write", 00:11:51.446 "zoned": false, 00:11:51.446 "supported_io_types": { 00:11:51.446 "read": true, 00:11:51.446 "write": true, 00:11:51.446 "unmap": true, 00:11:51.446 "flush": true, 00:11:51.446 "reset": true, 00:11:51.446 "nvme_admin": false, 00:11:51.446 "nvme_io": false, 00:11:51.446 "nvme_io_md": false, 00:11:51.446 "write_zeroes": true, 00:11:51.446 "zcopy": true, 00:11:51.446 "get_zone_info": false, 00:11:51.446 "zone_management": false, 00:11:51.446 "zone_append": false, 00:11:51.446 "compare": false, 00:11:51.446 "compare_and_write": false, 00:11:51.446 "abort": true, 00:11:51.446 "seek_hole": false, 00:11:51.446 "seek_data": false, 00:11:51.446 "copy": true, 00:11:51.446 "nvme_iov_md": false 00:11:51.446 }, 00:11:51.446 "memory_domains": [ 00:11:51.446 { 00:11:51.446 "dma_device_id": "system", 00:11:51.446 "dma_device_type": 1 00:11:51.446 }, 00:11:51.446 { 00:11:51.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.446 "dma_device_type": 2 00:11:51.446 } 00:11:51.446 ], 00:11:51.446 "driver_specific": {} 00:11:51.446 } 00:11:51.446 ] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.446 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:51.447 14:11:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.447 "name": "Existed_Raid", 00:11:51.447 "uuid": "1122782c-a0ce-446c-b656-87c7ac105f0f", 00:11:51.447 "strip_size_kb": 64, 00:11:51.447 "state": "configuring", 00:11:51.447 "raid_level": "concat", 00:11:51.447 "superblock": true, 00:11:51.447 "num_base_bdevs": 2, 00:11:51.447 "num_base_bdevs_discovered": 1, 00:11:51.447 "num_base_bdevs_operational": 2, 00:11:51.447 "base_bdevs_list": [ 00:11:51.447 { 00:11:51.447 "name": "BaseBdev1", 00:11:51.447 "uuid": "44108527-8ad7-4946-adab-e536dfd98bf0", 00:11:51.447 "is_configured": true, 00:11:51.447 "data_offset": 2048, 00:11:51.447 "data_size": 63488 00:11:51.447 }, 00:11:51.447 { 00:11:51.447 "name": "BaseBdev2", 00:11:51.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.447 
"is_configured": false, 00:11:51.447 "data_offset": 0, 00:11:51.447 "data_size": 0 00:11:51.447 } 00:11:51.447 ] 00:11:51.447 }' 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.447 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.014 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:52.014 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.014 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.014 [2024-11-27 14:11:22.772808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:52.014 [2024-11-27 14:11:22.772934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:52.014 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.015 [2024-11-27 14:11:22.784835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.015 [2024-11-27 14:11:22.786748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.015 [2024-11-27 14:11:22.786841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.015 14:11:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.015 14:11:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.015 "name": "Existed_Raid", 00:11:52.015 "uuid": "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7", 00:11:52.015 "strip_size_kb": 64, 00:11:52.015 "state": "configuring", 00:11:52.015 "raid_level": "concat", 00:11:52.015 "superblock": true, 00:11:52.015 "num_base_bdevs": 2, 00:11:52.015 "num_base_bdevs_discovered": 1, 00:11:52.015 "num_base_bdevs_operational": 2, 00:11:52.015 "base_bdevs_list": [ 00:11:52.015 { 00:11:52.015 "name": "BaseBdev1", 00:11:52.015 "uuid": "44108527-8ad7-4946-adab-e536dfd98bf0", 00:11:52.015 "is_configured": true, 00:11:52.015 "data_offset": 2048, 00:11:52.015 "data_size": 63488 00:11:52.015 }, 00:11:52.015 { 00:11:52.015 "name": "BaseBdev2", 00:11:52.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.015 "is_configured": false, 00:11:52.015 "data_offset": 0, 00:11:52.015 "data_size": 0 00:11:52.015 } 00:11:52.015 ] 00:11:52.015 }' 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.015 14:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.274 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.274 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.274 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.533 [2024-11-27 14:11:23.259576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.533 [2024-11-27 14:11:23.259856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.533 [2024-11-27 14:11:23.259874] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:52.533 [2024-11-27 14:11:23.260272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:11:52.533 BaseBdev2 00:11:52.533 [2024-11-27 14:11:23.260522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.533 [2024-11-27 14:11:23.260542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:52.534 [2024-11-27 14:11:23.260702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.534 14:11:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.534 [ 00:11:52.534 { 00:11:52.534 "name": "BaseBdev2", 00:11:52.534 "aliases": [ 00:11:52.534 "d35417bf-1917-44c8-b3b3-5ebdec29bc81" 00:11:52.534 ], 00:11:52.534 "product_name": "Malloc disk", 00:11:52.534 "block_size": 512, 00:11:52.534 "num_blocks": 65536, 00:11:52.534 "uuid": "d35417bf-1917-44c8-b3b3-5ebdec29bc81", 00:11:52.534 "assigned_rate_limits": { 00:11:52.534 "rw_ios_per_sec": 0, 00:11:52.534 "rw_mbytes_per_sec": 0, 00:11:52.534 "r_mbytes_per_sec": 0, 00:11:52.534 "w_mbytes_per_sec": 0 00:11:52.534 }, 00:11:52.534 "claimed": true, 00:11:52.534 "claim_type": "exclusive_write", 00:11:52.534 "zoned": false, 00:11:52.534 "supported_io_types": { 00:11:52.534 "read": true, 00:11:52.534 "write": true, 00:11:52.534 "unmap": true, 00:11:52.534 "flush": true, 00:11:52.534 "reset": true, 00:11:52.534 "nvme_admin": false, 00:11:52.534 "nvme_io": false, 00:11:52.534 "nvme_io_md": false, 00:11:52.534 "write_zeroes": true, 00:11:52.534 "zcopy": true, 00:11:52.534 "get_zone_info": false, 00:11:52.534 "zone_management": false, 00:11:52.534 "zone_append": false, 00:11:52.534 "compare": false, 00:11:52.534 "compare_and_write": false, 00:11:52.534 "abort": true, 00:11:52.534 "seek_hole": false, 00:11:52.534 "seek_data": false, 00:11:52.534 "copy": true, 00:11:52.534 "nvme_iov_md": false 00:11:52.534 }, 00:11:52.534 "memory_domains": [ 00:11:52.534 { 00:11:52.534 "dma_device_id": "system", 00:11:52.534 "dma_device_type": 1 00:11:52.534 }, 00:11:52.534 { 00:11:52.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.534 "dma_device_type": 2 00:11:52.534 } 00:11:52.534 ], 00:11:52.534 "driver_specific": {} 00:11:52.534 } 00:11:52.534 ] 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:52.534 14:11:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.534 14:11:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.534 "name": "Existed_Raid", 00:11:52.534 "uuid": "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7", 00:11:52.534 "strip_size_kb": 64, 00:11:52.534 "state": "online", 00:11:52.534 "raid_level": "concat", 00:11:52.534 "superblock": true, 00:11:52.534 "num_base_bdevs": 2, 00:11:52.534 "num_base_bdevs_discovered": 2, 00:11:52.534 "num_base_bdevs_operational": 2, 00:11:52.534 "base_bdevs_list": [ 00:11:52.534 { 00:11:52.534 "name": "BaseBdev1", 00:11:52.534 "uuid": "44108527-8ad7-4946-adab-e536dfd98bf0", 00:11:52.534 "is_configured": true, 00:11:52.534 "data_offset": 2048, 00:11:52.534 "data_size": 63488 00:11:52.534 }, 00:11:52.534 { 00:11:52.534 "name": "BaseBdev2", 00:11:52.534 "uuid": "d35417bf-1917-44c8-b3b3-5ebdec29bc81", 00:11:52.534 "is_configured": true, 00:11:52.534 "data_offset": 2048, 00:11:52.534 "data_size": 63488 00:11:52.534 } 00:11:52.534 ] 00:11:52.534 }' 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.534 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:53.102 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.103 [2024-11-27 14:11:23.771052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.103 "name": "Existed_Raid", 00:11:53.103 "aliases": [ 00:11:53.103 "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7" 00:11:53.103 ], 00:11:53.103 "product_name": "Raid Volume", 00:11:53.103 "block_size": 512, 00:11:53.103 "num_blocks": 126976, 00:11:53.103 "uuid": "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7", 00:11:53.103 "assigned_rate_limits": { 00:11:53.103 "rw_ios_per_sec": 0, 00:11:53.103 "rw_mbytes_per_sec": 0, 00:11:53.103 "r_mbytes_per_sec": 0, 00:11:53.103 "w_mbytes_per_sec": 0 00:11:53.103 }, 00:11:53.103 "claimed": false, 00:11:53.103 "zoned": false, 00:11:53.103 "supported_io_types": { 00:11:53.103 "read": true, 00:11:53.103 "write": true, 00:11:53.103 "unmap": true, 00:11:53.103 "flush": true, 00:11:53.103 "reset": true, 00:11:53.103 "nvme_admin": false, 00:11:53.103 "nvme_io": false, 00:11:53.103 "nvme_io_md": false, 00:11:53.103 "write_zeroes": true, 00:11:53.103 "zcopy": false, 00:11:53.103 "get_zone_info": false, 00:11:53.103 "zone_management": false, 00:11:53.103 "zone_append": false, 00:11:53.103 "compare": false, 00:11:53.103 "compare_and_write": false, 00:11:53.103 "abort": false, 00:11:53.103 "seek_hole": false, 00:11:53.103 "seek_data": false, 00:11:53.103 "copy": false, 00:11:53.103 "nvme_iov_md": false 00:11:53.103 }, 00:11:53.103 "memory_domains": [ 00:11:53.103 { 00:11:53.103 
"dma_device_id": "system", 00:11:53.103 "dma_device_type": 1 00:11:53.103 }, 00:11:53.103 { 00:11:53.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.103 "dma_device_type": 2 00:11:53.103 }, 00:11:53.103 { 00:11:53.103 "dma_device_id": "system", 00:11:53.103 "dma_device_type": 1 00:11:53.103 }, 00:11:53.103 { 00:11:53.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.103 "dma_device_type": 2 00:11:53.103 } 00:11:53.103 ], 00:11:53.103 "driver_specific": { 00:11:53.103 "raid": { 00:11:53.103 "uuid": "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7", 00:11:53.103 "strip_size_kb": 64, 00:11:53.103 "state": "online", 00:11:53.103 "raid_level": "concat", 00:11:53.103 "superblock": true, 00:11:53.103 "num_base_bdevs": 2, 00:11:53.103 "num_base_bdevs_discovered": 2, 00:11:53.103 "num_base_bdevs_operational": 2, 00:11:53.103 "base_bdevs_list": [ 00:11:53.103 { 00:11:53.103 "name": "BaseBdev1", 00:11:53.103 "uuid": "44108527-8ad7-4946-adab-e536dfd98bf0", 00:11:53.103 "is_configured": true, 00:11:53.103 "data_offset": 2048, 00:11:53.103 "data_size": 63488 00:11:53.103 }, 00:11:53.103 { 00:11:53.103 "name": "BaseBdev2", 00:11:53.103 "uuid": "d35417bf-1917-44c8-b3b3-5ebdec29bc81", 00:11:53.103 "is_configured": true, 00:11:53.103 "data_offset": 2048, 00:11:53.103 "data_size": 63488 00:11:53.103 } 00:11:53.103 ] 00:11:53.103 } 00:11:53.103 } 00:11:53.103 }' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:53.103 BaseBdev2' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.103 14:11:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.103 14:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.103 [2024-11-27 14:11:23.982468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.103 [2024-11-27 14:11:23.982505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.103 [2024-11-27 14:11:23.982559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.363 "name": "Existed_Raid", 00:11:53.363 "uuid": "5bb6ddbd-f081-4c03-9c4a-1cc6f63d9db7", 00:11:53.363 "strip_size_kb": 64, 00:11:53.363 "state": "offline", 00:11:53.363 "raid_level": "concat", 00:11:53.363 "superblock": true, 00:11:53.363 "num_base_bdevs": 2, 00:11:53.363 "num_base_bdevs_discovered": 1, 00:11:53.363 "num_base_bdevs_operational": 1, 00:11:53.363 "base_bdevs_list": [ 00:11:53.363 { 00:11:53.363 "name": null, 00:11:53.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.363 "is_configured": false, 00:11:53.363 "data_offset": 0, 00:11:53.363 "data_size": 63488 00:11:53.363 }, 00:11:53.363 { 00:11:53.363 "name": "BaseBdev2", 00:11:53.363 "uuid": "d35417bf-1917-44c8-b3b3-5ebdec29bc81", 00:11:53.363 "is_configured": true, 00:11:53.363 "data_offset": 2048, 00:11:53.363 "data_size": 63488 00:11:53.363 } 00:11:53.363 ] 
00:11:53.363 }' 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.363 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.623 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 [2024-11-27 14:11:24.564601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.623 [2024-11-27 14:11:24.564720] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:53.888 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.888 14:11:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.888 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.888 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62125 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62125 ']' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62125 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62125 00:11:53.889 killing process with pid 62125 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62125' 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62125 00:11:53.889 [2024-11-27 14:11:24.764321] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.889 14:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62125 00:11:53.889 [2024-11-27 14:11:24.784011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.277 14:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:55.277 00:11:55.277 real 0m5.190s 00:11:55.277 user 0m7.472s 00:11:55.277 sys 0m0.809s 00:11:55.277 ************************************ 00:11:55.277 END TEST raid_state_function_test_sb 00:11:55.277 ************************************ 00:11:55.277 14:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.277 14:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.277 14:11:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:55.277 14:11:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.277 14:11:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.277 14:11:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.277 ************************************ 00:11:55.277 START TEST raid_superblock_test 00:11:55.277 ************************************ 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62376 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62376 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62376 ']' 00:11:55.277 
14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.277 14:11:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.277 [2024-11-27 14:11:26.140461] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:11:55.277 [2024-11-27 14:11:26.141157] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62376 ] 00:11:55.537 [2024-11-27 14:11:26.298250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.537 [2024-11-27 14:11:26.420300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.796 [2024-11-27 14:11:26.624929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.796 [2024-11-27 14:11:26.625119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.055 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.055 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 malloc1 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 [2024-11-27 14:11:27.066104] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:56.314 [2024-11-27 14:11:27.066239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.314 [2024-11-27 14:11:27.066294] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.314 [2024-11-27 14:11:27.066330] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:11:56.314 [2024-11-27 14:11:27.069194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.314 [2024-11-27 14:11:27.069288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:56.314 pt1 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 malloc2 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 [2024-11-27 14:11:27.127223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.314 [2024-11-27 14:11:27.127290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.314 [2024-11-27 14:11:27.127318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:56.314 [2024-11-27 14:11:27.127329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.314 [2024-11-27 14:11:27.129816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.314 pt2 00:11:56.314 [2024-11-27 14:11:27.129930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.314 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.314 [2024-11-27 14:11:27.139264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.314 [2024-11-27 14:11:27.141384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.315 [2024-11-27 14:11:27.141590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:56.315 [2024-11-27 14:11:27.141608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:11:56.315 [2024-11-27 14:11:27.141933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:56.315 [2024-11-27 14:11:27.142140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:56.315 [2024-11-27 14:11:27.142154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:56.315 [2024-11-27 14:11:27.142352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.315 14:11:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.315 "name": "raid_bdev1", 00:11:56.315 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:56.315 "strip_size_kb": 64, 00:11:56.315 "state": "online", 00:11:56.315 "raid_level": "concat", 00:11:56.315 "superblock": true, 00:11:56.315 "num_base_bdevs": 2, 00:11:56.315 "num_base_bdevs_discovered": 2, 00:11:56.315 "num_base_bdevs_operational": 2, 00:11:56.315 "base_bdevs_list": [ 00:11:56.315 { 00:11:56.315 "name": "pt1", 00:11:56.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.315 "is_configured": true, 00:11:56.315 "data_offset": 2048, 00:11:56.315 "data_size": 63488 00:11:56.315 }, 00:11:56.315 { 00:11:56.315 "name": "pt2", 00:11:56.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.315 "is_configured": true, 00:11:56.315 "data_offset": 2048, 00:11:56.315 "data_size": 63488 00:11:56.315 } 00:11:56.315 ] 00:11:56.315 }' 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.315 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.883 
14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.883 [2024-11-27 14:11:27.642621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.883 "name": "raid_bdev1", 00:11:56.883 "aliases": [ 00:11:56.883 "079532ec-d8cd-4298-9f52-0bd0273487ff" 00:11:56.883 ], 00:11:56.883 "product_name": "Raid Volume", 00:11:56.883 "block_size": 512, 00:11:56.883 "num_blocks": 126976, 00:11:56.883 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:56.883 "assigned_rate_limits": { 00:11:56.883 "rw_ios_per_sec": 0, 00:11:56.883 "rw_mbytes_per_sec": 0, 00:11:56.883 "r_mbytes_per_sec": 0, 00:11:56.883 "w_mbytes_per_sec": 0 00:11:56.883 }, 00:11:56.883 "claimed": false, 00:11:56.883 "zoned": false, 00:11:56.883 "supported_io_types": { 00:11:56.883 "read": true, 00:11:56.883 "write": true, 00:11:56.883 "unmap": true, 00:11:56.883 "flush": true, 00:11:56.883 "reset": true, 00:11:56.883 "nvme_admin": false, 00:11:56.883 "nvme_io": false, 00:11:56.883 "nvme_io_md": false, 00:11:56.883 "write_zeroes": true, 00:11:56.883 "zcopy": false, 00:11:56.883 "get_zone_info": false, 00:11:56.883 "zone_management": false, 00:11:56.883 "zone_append": false, 00:11:56.883 "compare": false, 00:11:56.883 "compare_and_write": false, 00:11:56.883 "abort": false, 00:11:56.883 "seek_hole": false, 00:11:56.883 
"seek_data": false, 00:11:56.883 "copy": false, 00:11:56.883 "nvme_iov_md": false 00:11:56.883 }, 00:11:56.883 "memory_domains": [ 00:11:56.883 { 00:11:56.883 "dma_device_id": "system", 00:11:56.883 "dma_device_type": 1 00:11:56.883 }, 00:11:56.883 { 00:11:56.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.883 "dma_device_type": 2 00:11:56.883 }, 00:11:56.883 { 00:11:56.883 "dma_device_id": "system", 00:11:56.883 "dma_device_type": 1 00:11:56.883 }, 00:11:56.883 { 00:11:56.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.883 "dma_device_type": 2 00:11:56.883 } 00:11:56.883 ], 00:11:56.883 "driver_specific": { 00:11:56.883 "raid": { 00:11:56.883 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:56.883 "strip_size_kb": 64, 00:11:56.883 "state": "online", 00:11:56.883 "raid_level": "concat", 00:11:56.883 "superblock": true, 00:11:56.883 "num_base_bdevs": 2, 00:11:56.883 "num_base_bdevs_discovered": 2, 00:11:56.883 "num_base_bdevs_operational": 2, 00:11:56.883 "base_bdevs_list": [ 00:11:56.883 { 00:11:56.883 "name": "pt1", 00:11:56.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.883 "is_configured": true, 00:11:56.883 "data_offset": 2048, 00:11:56.883 "data_size": 63488 00:11:56.883 }, 00:11:56.883 { 00:11:56.883 "name": "pt2", 00:11:56.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.883 "is_configured": true, 00:11:56.883 "data_offset": 2048, 00:11:56.883 "data_size": 63488 00:11:56.883 } 00:11:56.883 ] 00:11:56.883 } 00:11:56.883 } 00:11:56.883 }' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:56.883 pt2' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.883 14:11:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.883 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.884 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.144 [2024-11-27 14:11:27.890271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=079532ec-d8cd-4298-9f52-0bd0273487ff 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 079532ec-d8cd-4298-9f52-0bd0273487ff ']' 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 [2024-11-27 14:11:27.953788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.144 [2024-11-27 14:11:27.953822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.144 [2024-11-27 14:11:27.953922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.144 [2024-11-27 14:11:27.953979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.144 [2024-11-27 14:11:27.953993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:57.144 14:11:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.144 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 [2024-11-27 14:11:28.085598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:57.145 [2024-11-27 14:11:28.087564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:57.145 [2024-11-27 14:11:28.087686] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:57.145 [2024-11-27 14:11:28.087777] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:57.145 [2024-11-27 14:11:28.087832] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.145 [2024-11-27 14:11:28.087866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:57.145 request: 00:11:57.145 { 00:11:57.145 "name": "raid_bdev1", 00:11:57.145 "raid_level": "concat", 00:11:57.145 "base_bdevs": [ 00:11:57.145 "malloc1", 00:11:57.145 "malloc2" 00:11:57.145 ], 00:11:57.145 "strip_size_kb": 64, 00:11:57.145 "superblock": false, 00:11:57.145 "method": "bdev_raid_create", 00:11:57.145 "req_id": 1 00:11:57.145 } 00:11:57.145 Got JSON-RPC error response 00:11:57.145 response: 00:11:57.145 { 00:11:57.145 "code": -17, 00:11:57.145 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:57.145 } 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:57.145 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.405 
14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.405 [2024-11-27 14:11:28.153515] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.405 [2024-11-27 14:11:28.153688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.405 [2024-11-27 14:11:28.153713] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.405 [2024-11-27 14:11:28.153725] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.405 [2024-11-27 14:11:28.156296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.405 [2024-11-27 14:11:28.156340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.405 [2024-11-27 14:11:28.156438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:57.405 [2024-11-27 14:11:28.156502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.405 pt1 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.405 "name": "raid_bdev1", 00:11:57.405 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:57.405 "strip_size_kb": 64, 00:11:57.405 "state": "configuring", 00:11:57.405 "raid_level": "concat", 00:11:57.405 "superblock": true, 00:11:57.405 "num_base_bdevs": 2, 00:11:57.405 "num_base_bdevs_discovered": 1, 00:11:57.405 "num_base_bdevs_operational": 2, 00:11:57.405 "base_bdevs_list": [ 00:11:57.405 { 00:11:57.405 "name": "pt1", 00:11:57.405 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:11:57.405 "is_configured": true, 00:11:57.405 "data_offset": 2048, 00:11:57.405 "data_size": 63488 00:11:57.405 }, 00:11:57.405 { 00:11:57.405 "name": null, 00:11:57.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.405 "is_configured": false, 00:11:57.405 "data_offset": 2048, 00:11:57.405 "data_size": 63488 00:11:57.405 } 00:11:57.405 ] 00:11:57.405 }' 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.405 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.664 [2024-11-27 14:11:28.580784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.664 [2024-11-27 14:11:28.580944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.664 [2024-11-27 14:11:28.581008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:57.664 [2024-11-27 14:11:28.581061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.664 [2024-11-27 14:11:28.581649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.664 [2024-11-27 14:11:28.581725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:11:57.664 [2024-11-27 14:11:28.581851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:57.664 [2024-11-27 14:11:28.581912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:57.664 [2024-11-27 14:11:28.582084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.664 [2024-11-27 14:11:28.582147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:57.664 [2024-11-27 14:11:28.582443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.664 [2024-11-27 14:11:28.582646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.664 [2024-11-27 14:11:28.582687] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:57.664 [2024-11-27 14:11:28.582891] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.664 pt2 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:57.664 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.665 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.924 14:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.924 "name": "raid_bdev1", 00:11:57.924 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:57.924 "strip_size_kb": 64, 00:11:57.924 "state": "online", 00:11:57.924 "raid_level": "concat", 00:11:57.924 "superblock": true, 00:11:57.924 "num_base_bdevs": 2, 00:11:57.924 "num_base_bdevs_discovered": 2, 00:11:57.924 "num_base_bdevs_operational": 2, 00:11:57.924 "base_bdevs_list": [ 00:11:57.924 { 00:11:57.924 "name": "pt1", 00:11:57.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.924 "is_configured": true, 00:11:57.924 "data_offset": 2048, 00:11:57.924 "data_size": 63488 00:11:57.924 }, 00:11:57.924 { 00:11:57.924 "name": "pt2", 00:11:57.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.924 "is_configured": true, 00:11:57.924 "data_offset": 2048, 00:11:57.924 "data_size": 63488 00:11:57.924 } 00:11:57.924 ] 00:11:57.924 }' 00:11:57.924 14:11:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.924 14:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.184 [2024-11-27 14:11:29.040498] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.184 "name": "raid_bdev1", 00:11:58.184 "aliases": [ 00:11:58.184 "079532ec-d8cd-4298-9f52-0bd0273487ff" 00:11:58.184 ], 00:11:58.184 "product_name": "Raid Volume", 00:11:58.184 "block_size": 512, 00:11:58.184 "num_blocks": 126976, 00:11:58.184 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:58.184 "assigned_rate_limits": { 00:11:58.184 "rw_ios_per_sec": 0, 00:11:58.184 "rw_mbytes_per_sec": 0, 00:11:58.184 
"r_mbytes_per_sec": 0, 00:11:58.184 "w_mbytes_per_sec": 0 00:11:58.184 }, 00:11:58.184 "claimed": false, 00:11:58.184 "zoned": false, 00:11:58.184 "supported_io_types": { 00:11:58.184 "read": true, 00:11:58.184 "write": true, 00:11:58.184 "unmap": true, 00:11:58.184 "flush": true, 00:11:58.184 "reset": true, 00:11:58.184 "nvme_admin": false, 00:11:58.184 "nvme_io": false, 00:11:58.184 "nvme_io_md": false, 00:11:58.184 "write_zeroes": true, 00:11:58.184 "zcopy": false, 00:11:58.184 "get_zone_info": false, 00:11:58.184 "zone_management": false, 00:11:58.184 "zone_append": false, 00:11:58.184 "compare": false, 00:11:58.184 "compare_and_write": false, 00:11:58.184 "abort": false, 00:11:58.184 "seek_hole": false, 00:11:58.184 "seek_data": false, 00:11:58.184 "copy": false, 00:11:58.184 "nvme_iov_md": false 00:11:58.184 }, 00:11:58.184 "memory_domains": [ 00:11:58.184 { 00:11:58.184 "dma_device_id": "system", 00:11:58.184 "dma_device_type": 1 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.184 "dma_device_type": 2 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "dma_device_id": "system", 00:11:58.184 "dma_device_type": 1 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.184 "dma_device_type": 2 00:11:58.184 } 00:11:58.184 ], 00:11:58.184 "driver_specific": { 00:11:58.184 "raid": { 00:11:58.184 "uuid": "079532ec-d8cd-4298-9f52-0bd0273487ff", 00:11:58.184 "strip_size_kb": 64, 00:11:58.184 "state": "online", 00:11:58.184 "raid_level": "concat", 00:11:58.184 "superblock": true, 00:11:58.184 "num_base_bdevs": 2, 00:11:58.184 "num_base_bdevs_discovered": 2, 00:11:58.184 "num_base_bdevs_operational": 2, 00:11:58.184 "base_bdevs_list": [ 00:11:58.184 { 00:11:58.184 "name": "pt1", 00:11:58.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": 
"pt2", 00:11:58.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 } 00:11:58.184 ] 00:11:58.184 } 00:11:58.184 } 00:11:58.184 }' 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.184 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:58.184 pt2' 00:11:58.442 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.442 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.442 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.442 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:58.443 [2024-11-27 14:11:29.280043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 079532ec-d8cd-4298-9f52-0bd0273487ff '!=' 079532ec-d8cd-4298-9f52-0bd0273487ff ']' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62376 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62376 ']' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62376 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62376 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62376' 00:11:58.443 killing process with pid 62376 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62376 00:11:58.443 [2024-11-27 14:11:29.366182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.443 [2024-11-27 14:11:29.366350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.443 14:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62376 00:11:58.443 [2024-11-27 14:11:29.366440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.443 [2024-11-27 14:11:29.366456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:58.702 [2024-11-27 14:11:29.593192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.082 14:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:00.082 00:12:00.082 real 0m4.733s 00:12:00.082 user 0m6.685s 00:12:00.082 sys 0m0.767s 00:12:00.082 14:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.082 14:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:00.082 ************************************ 00:12:00.082 END TEST raid_superblock_test 00:12:00.082 ************************************ 00:12:00.082 14:11:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:00.082 14:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.082 14:11:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.082 14:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 ************************************ 00:12:00.082 START TEST raid_read_error_test 00:12:00.082 ************************************ 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tc9x0oMcAU 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62589 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62589 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62589 ']' 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.082 14:11:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.082 [2024-11-27 14:11:30.923976] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:00.082 [2024-11-27 14:11:30.924221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62589 ] 00:12:00.342 [2024-11-27 14:11:31.104912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.342 [2024-11-27 14:11:31.265522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.602 [2024-11-27 14:11:31.514242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.602 [2024-11-27 14:11:31.514396] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.861 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 BaseBdev1_malloc 
00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 true 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 [2024-11-27 14:11:31.862553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:01.121 [2024-11-27 14:11:31.862611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.121 [2024-11-27 14:11:31.862633] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:01.121 [2024-11-27 14:11:31.862644] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.121 [2024-11-27 14:11:31.864879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.121 [2024-11-27 14:11:31.864981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.121 BaseBdev1 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 BaseBdev2_malloc 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 true 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 [2024-11-27 14:11:31.928781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:01.121 [2024-11-27 14:11:31.928843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.121 [2024-11-27 14:11:31.928879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:01.121 [2024-11-27 14:11:31.928891] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.121 [2024-11-27 14:11:31.931290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.121 [2024-11-27 14:11:31.931334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.121 BaseBdev2 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.121 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.121 [2024-11-27 14:11:31.940823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.121 [2024-11-27 14:11:31.942893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.121 [2024-11-27 14:11:31.943095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:01.121 [2024-11-27 14:11:31.943112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:01.121 [2024-11-27 14:11:31.943380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:01.122 [2024-11-27 14:11:31.943550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:01.122 [2024-11-27 14:11:31.943563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:01.122 [2024-11-27 14:11:31.943721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.122 14:11:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.122 "name": "raid_bdev1", 00:12:01.122 "uuid": "cd08f28c-c02f-4efc-a6a5-c216f7b29db1", 00:12:01.122 "strip_size_kb": 64, 00:12:01.122 "state": "online", 00:12:01.122 "raid_level": "concat", 00:12:01.122 "superblock": true, 00:12:01.122 "num_base_bdevs": 2, 00:12:01.122 "num_base_bdevs_discovered": 2, 00:12:01.122 "num_base_bdevs_operational": 2, 00:12:01.122 "base_bdevs_list": [ 00:12:01.122 { 00:12:01.122 "name": "BaseBdev1", 00:12:01.122 "uuid": "aef6a537-6989-5329-acd8-33e364688063", 00:12:01.122 "is_configured": true, 00:12:01.122 "data_offset": 2048, 00:12:01.122 "data_size": 63488 00:12:01.122 }, 00:12:01.122 { 00:12:01.122 "name": "BaseBdev2", 00:12:01.122 
"uuid": "4880c977-699d-59fb-988f-fb46a6ae0190", 00:12:01.122 "is_configured": true, 00:12:01.122 "data_offset": 2048, 00:12:01.122 "data_size": 63488 00:12:01.122 } 00:12:01.122 ] 00:12:01.122 }' 00:12:01.122 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.122 14:11:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.691 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:01.691 14:11:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:01.691 [2024-11-27 14:11:32.525449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.728 "name": "raid_bdev1", 00:12:02.728 "uuid": "cd08f28c-c02f-4efc-a6a5-c216f7b29db1", 00:12:02.728 "strip_size_kb": 64, 00:12:02.728 "state": "online", 00:12:02.728 "raid_level": "concat", 00:12:02.728 "superblock": true, 00:12:02.728 "num_base_bdevs": 2, 00:12:02.728 "num_base_bdevs_discovered": 2, 00:12:02.728 "num_base_bdevs_operational": 2, 00:12:02.728 "base_bdevs_list": [ 00:12:02.728 { 00:12:02.728 "name": "BaseBdev1", 00:12:02.728 "uuid": "aef6a537-6989-5329-acd8-33e364688063", 00:12:02.728 "is_configured": true, 00:12:02.728 "data_offset": 2048, 00:12:02.728 "data_size": 63488 00:12:02.728 }, 00:12:02.728 { 00:12:02.728 "name": "BaseBdev2", 00:12:02.728 "uuid": 
"4880c977-699d-59fb-988f-fb46a6ae0190", 00:12:02.728 "is_configured": true, 00:12:02.728 "data_offset": 2048, 00:12:02.728 "data_size": 63488 00:12:02.728 } 00:12:02.728 ] 00:12:02.728 }' 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.728 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.987 [2024-11-27 14:11:33.914225] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:02.987 [2024-11-27 14:11:33.914270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:02.987 [2024-11-27 14:11:33.917674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.987 [2024-11-27 14:11:33.917723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.987 [2024-11-27 14:11:33.917754] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.987 [2024-11-27 14:11:33.917791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:02.987 { 00:12:02.987 "results": [ 00:12:02.987 { 00:12:02.987 "job": "raid_bdev1", 00:12:02.987 "core_mask": "0x1", 00:12:02.987 "workload": "randrw", 00:12:02.987 "percentage": 50, 00:12:02.987 "status": "finished", 00:12:02.987 "queue_depth": 1, 00:12:02.987 "io_size": 131072, 00:12:02.987 "runtime": 1.389763, 00:12:02.987 "iops": 14316.110013002217, 00:12:02.987 "mibps": 1789.513751625277, 00:12:02.987 "io_failed": 1, 00:12:02.987 "io_timeout": 0, 00:12:02.987 "avg_latency_us": 
96.40604150677298, 00:12:02.987 "min_latency_us": 26.606113537117903, 00:12:02.987 "max_latency_us": 1445.2262008733624 00:12:02.987 } 00:12:02.987 ], 00:12:02.987 "core_count": 1 00:12:02.987 } 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62589 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62589 ']' 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62589 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:02.987 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62589 00:12:03.246 killing process with pid 62589 00:12:03.246 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.246 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.246 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62589' 00:12:03.246 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62589 00:12:03.246 [2024-11-27 14:11:33.961436] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.246 14:11:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62589 00:12:03.246 [2024-11-27 14:11:34.101204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tc9x0oMcAU 00:12:04.624 
14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:04.624 ************************************ 00:12:04.624 END TEST raid_read_error_test 00:12:04.624 ************************************ 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:04.624 00:12:04.624 real 0m4.510s 00:12:04.624 user 0m5.474s 00:12:04.624 sys 0m0.527s 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.624 14:11:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.624 14:11:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:04.624 14:11:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:04.624 14:11:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.624 14:11:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.624 ************************************ 00:12:04.624 START TEST raid_write_error_test 00:12:04.624 ************************************ 00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:04.624 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:04.625 14:11:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nAaHNbelab 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62729 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62729 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62729 ']' 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.625 14:11:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.625 [2024-11-27 14:11:35.517131] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:04.625 [2024-11-27 14:11:35.517271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62729 ] 00:12:04.884 [2024-11-27 14:11:35.675412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.884 [2024-11-27 14:11:35.800372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.142 [2024-11-27 14:11:36.005099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.142 [2024-11-27 14:11:36.005181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.711 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.711 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 BaseBdev1_malloc 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 true 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 [2024-11-27 14:11:36.427143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.712 [2024-11-27 14:11:36.427201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.712 [2024-11-27 14:11:36.427226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:05.712 [2024-11-27 14:11:36.427237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.712 [2024-11-27 14:11:36.429554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.712 [2024-11-27 14:11:36.429593] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.712 BaseBdev1 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 BaseBdev2_malloc 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.712 14:11:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 true 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 [2024-11-27 14:11:36.495365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.712 [2024-11-27 14:11:36.495426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.712 [2024-11-27 14:11:36.495462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.712 [2024-11-27 14:11:36.495475] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.712 [2024-11-27 14:11:36.497902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.712 [2024-11-27 14:11:36.497942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.712 BaseBdev2 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 [2024-11-27 14:11:36.507403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:05.712 [2024-11-27 14:11:36.509412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.712 [2024-11-27 14:11:36.509616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:05.712 [2024-11-27 14:11:36.509633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.712 [2024-11-27 14:11:36.509930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:05.712 [2024-11-27 14:11:36.510156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:05.712 [2024-11-27 14:11:36.510179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:05.712 [2024-11-27 14:11:36.510370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.712 14:11:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.712 "name": "raid_bdev1", 00:12:05.712 "uuid": "455b5d70-04db-4ea5-bd55-2d8b366ac58b", 00:12:05.712 "strip_size_kb": 64, 00:12:05.712 "state": "online", 00:12:05.712 "raid_level": "concat", 00:12:05.712 "superblock": true, 00:12:05.712 "num_base_bdevs": 2, 00:12:05.712 "num_base_bdevs_discovered": 2, 00:12:05.712 "num_base_bdevs_operational": 2, 00:12:05.712 "base_bdevs_list": [ 00:12:05.712 { 00:12:05.712 "name": "BaseBdev1", 00:12:05.712 "uuid": "c4daf2d2-1167-5873-87f2-71f57061b132", 00:12:05.712 "is_configured": true, 00:12:05.712 "data_offset": 2048, 00:12:05.712 "data_size": 63488 00:12:05.712 }, 00:12:05.712 { 00:12:05.712 "name": "BaseBdev2", 00:12:05.712 "uuid": "42c76468-2f8f-53a0-8cb6-f4f3a485faa0", 00:12:05.712 "is_configured": true, 00:12:05.712 "data_offset": 2048, 00:12:05.712 "data_size": 63488 00:12:05.712 } 00:12:05.712 ] 00:12:05.712 }' 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.712 14:11:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.288 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.288 14:11:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:06.288 [2024-11-27 14:11:37.059811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.231 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.232 14:11:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.232 14:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.232 "name": "raid_bdev1", 00:12:07.232 "uuid": "455b5d70-04db-4ea5-bd55-2d8b366ac58b", 00:12:07.232 "strip_size_kb": 64, 00:12:07.232 "state": "online", 00:12:07.232 "raid_level": "concat", 00:12:07.232 "superblock": true, 00:12:07.232 "num_base_bdevs": 2, 00:12:07.232 "num_base_bdevs_discovered": 2, 00:12:07.232 "num_base_bdevs_operational": 2, 00:12:07.232 "base_bdevs_list": [ 00:12:07.232 { 00:12:07.232 "name": "BaseBdev1", 00:12:07.232 "uuid": "c4daf2d2-1167-5873-87f2-71f57061b132", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 }, 00:12:07.232 { 00:12:07.232 "name": "BaseBdev2", 00:12:07.232 "uuid": "42c76468-2f8f-53a0-8cb6-f4f3a485faa0", 00:12:07.232 "is_configured": true, 00:12:07.232 "data_offset": 2048, 00:12:07.232 "data_size": 63488 00:12:07.232 } 00:12:07.232 ] 00:12:07.232 }' 00:12:07.232 14:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.232 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.492 [2024-11-27 14:11:38.420020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.492 [2024-11-27 14:11:38.420062] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.492 [2024-11-27 14:11:38.423067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.492 [2024-11-27 14:11:38.423145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.492 [2024-11-27 14:11:38.423183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.492 [2024-11-27 14:11:38.423196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:07.492 { 00:12:07.492 "results": [ 00:12:07.492 { 00:12:07.492 "job": "raid_bdev1", 00:12:07.492 "core_mask": "0x1", 00:12:07.492 "workload": "randrw", 00:12:07.492 "percentage": 50, 00:12:07.492 "status": "finished", 00:12:07.492 "queue_depth": 1, 00:12:07.492 "io_size": 131072, 00:12:07.492 "runtime": 1.361192, 00:12:07.492 "iops": 14759.857536629659, 00:12:07.492 "mibps": 1844.9821920787074, 00:12:07.492 "io_failed": 1, 00:12:07.492 "io_timeout": 0, 00:12:07.492 "avg_latency_us": 93.72408180013858, 00:12:07.492 "min_latency_us": 26.606113537117903, 00:12:07.492 "max_latency_us": 1624.0908296943232 00:12:07.492 } 00:12:07.492 ], 00:12:07.492 "core_count": 1 00:12:07.492 } 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62729 00:12:07.492 14:11:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62729 ']' 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62729 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.492 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62729 00:12:07.752 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.752 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.752 killing process with pid 62729 00:12:07.752 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62729' 00:12:07.752 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62729 00:12:07.752 [2024-11-27 14:11:38.472148] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.752 14:11:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62729 00:12:07.752 [2024-11-27 14:11:38.620511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nAaHNbelab 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.129 14:11:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:09.129 00:12:09.129 real 0m4.501s 00:12:09.129 user 0m5.422s 00:12:09.129 sys 0m0.534s 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.129 14:11:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 ************************************ 00:12:09.129 END TEST raid_write_error_test 00:12:09.129 ************************************ 00:12:09.129 14:11:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:09.129 14:11:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:09.129 14:11:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:09.129 14:11:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.129 14:11:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 ************************************ 00:12:09.129 START TEST raid_state_function_test 00:12:09.129 ************************************ 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62873 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62873' 00:12:09.129 Process raid pid: 62873 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62873 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62873 ']' 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.129 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 [2024-11-27 14:11:40.084291] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:09.388 [2024-11-27 14:11:40.084455] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.388 [2024-11-27 14:11:40.261040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.647 [2024-11-27 14:11:40.380340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.647 [2024-11-27 14:11:40.599063] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.647 [2024-11-27 14:11:40.599127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.216 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.216 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.217 [2024-11-27 14:11:40.937471] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.217 [2024-11-27 14:11:40.937539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.217 [2024-11-27 14:11:40.937551] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.217 [2024-11-27 14:11:40.937563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.217 14:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.217 "name": "Existed_Raid", 00:12:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.217 "strip_size_kb": 0, 00:12:10.217 "state": "configuring", 00:12:10.217 
"raid_level": "raid1", 00:12:10.217 "superblock": false, 00:12:10.217 "num_base_bdevs": 2, 00:12:10.217 "num_base_bdevs_discovered": 0, 00:12:10.217 "num_base_bdevs_operational": 2, 00:12:10.217 "base_bdevs_list": [ 00:12:10.217 { 00:12:10.217 "name": "BaseBdev1", 00:12:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.217 "is_configured": false, 00:12:10.217 "data_offset": 0, 00:12:10.217 "data_size": 0 00:12:10.217 }, 00:12:10.217 { 00:12:10.217 "name": "BaseBdev2", 00:12:10.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.217 "is_configured": false, 00:12:10.217 "data_offset": 0, 00:12:10.217 "data_size": 0 00:12:10.217 } 00:12:10.217 ] 00:12:10.217 }' 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.217 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.475 [2024-11-27 14:11:41.400654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.475 [2024-11-27 14:11:41.400701] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:10.475 [2024-11-27 14:11:41.412652] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.475 [2024-11-27 14:11:41.412708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:10.475 [2024-11-27 14:11:41.412721] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.475 [2024-11-27 14:11:41.412734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.475 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.733 [2024-11-27 14:11:41.462339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.733 BaseBdev1 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.733 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.733 [ 00:12:10.733 { 00:12:10.733 "name": "BaseBdev1", 00:12:10.733 "aliases": [ 00:12:10.733 "2cb98583-80a8-4a28-b11c-fe0df8354cc9" 00:12:10.733 ], 00:12:10.733 "product_name": "Malloc disk", 00:12:10.733 "block_size": 512, 00:12:10.733 "num_blocks": 65536, 00:12:10.733 "uuid": "2cb98583-80a8-4a28-b11c-fe0df8354cc9", 00:12:10.733 "assigned_rate_limits": { 00:12:10.733 "rw_ios_per_sec": 0, 00:12:10.733 "rw_mbytes_per_sec": 0, 00:12:10.734 "r_mbytes_per_sec": 0, 00:12:10.734 "w_mbytes_per_sec": 0 00:12:10.734 }, 00:12:10.734 "claimed": true, 00:12:10.734 "claim_type": "exclusive_write", 00:12:10.734 "zoned": false, 00:12:10.734 "supported_io_types": { 00:12:10.734 "read": true, 00:12:10.734 "write": true, 00:12:10.734 "unmap": true, 00:12:10.734 "flush": true, 00:12:10.734 "reset": true, 00:12:10.734 "nvme_admin": false, 00:12:10.734 "nvme_io": false, 00:12:10.734 "nvme_io_md": false, 00:12:10.734 "write_zeroes": true, 00:12:10.734 "zcopy": true, 00:12:10.734 "get_zone_info": false, 00:12:10.734 "zone_management": false, 00:12:10.734 "zone_append": false, 00:12:10.734 "compare": false, 00:12:10.734 "compare_and_write": false, 00:12:10.734 "abort": true, 00:12:10.734 "seek_hole": false, 00:12:10.734 "seek_data": false, 00:12:10.734 "copy": true, 00:12:10.734 "nvme_iov_md": 
false 00:12:10.734 }, 00:12:10.734 "memory_domains": [ 00:12:10.734 { 00:12:10.734 "dma_device_id": "system", 00:12:10.734 "dma_device_type": 1 00:12:10.734 }, 00:12:10.734 { 00:12:10.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.734 "dma_device_type": 2 00:12:10.734 } 00:12:10.734 ], 00:12:10.734 "driver_specific": {} 00:12:10.734 } 00:12:10.734 ] 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.734 14:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.734 "name": "Existed_Raid", 00:12:10.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.734 "strip_size_kb": 0, 00:12:10.734 "state": "configuring", 00:12:10.734 "raid_level": "raid1", 00:12:10.734 "superblock": false, 00:12:10.734 "num_base_bdevs": 2, 00:12:10.734 "num_base_bdevs_discovered": 1, 00:12:10.734 "num_base_bdevs_operational": 2, 00:12:10.734 "base_bdevs_list": [ 00:12:10.734 { 00:12:10.734 "name": "BaseBdev1", 00:12:10.734 "uuid": "2cb98583-80a8-4a28-b11c-fe0df8354cc9", 00:12:10.734 "is_configured": true, 00:12:10.734 "data_offset": 0, 00:12:10.734 "data_size": 65536 00:12:10.734 }, 00:12:10.734 { 00:12:10.734 "name": "BaseBdev2", 00:12:10.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.734 "is_configured": false, 00:12:10.734 "data_offset": 0, 00:12:10.734 "data_size": 0 00:12:10.734 } 00:12:10.734 ] 00:12:10.734 }' 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.734 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.301 [2024-11-27 14:11:41.977558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.301 [2024-11-27 14:11:41.977636] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.301 [2024-11-27 14:11:41.989595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.301 [2024-11-27 14:11:41.991658] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.301 [2024-11-27 14:11:41.991708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.301 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.301 "name": "Existed_Raid", 00:12:11.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.301 "strip_size_kb": 0, 00:12:11.301 "state": "configuring", 00:12:11.301 "raid_level": "raid1", 00:12:11.301 "superblock": false, 00:12:11.301 "num_base_bdevs": 2, 00:12:11.301 "num_base_bdevs_discovered": 1, 00:12:11.301 "num_base_bdevs_operational": 2, 00:12:11.301 "base_bdevs_list": [ 00:12:11.301 { 00:12:11.301 "name": "BaseBdev1", 00:12:11.301 "uuid": "2cb98583-80a8-4a28-b11c-fe0df8354cc9", 00:12:11.301 "is_configured": true, 00:12:11.301 "data_offset": 0, 00:12:11.301 "data_size": 65536 00:12:11.301 }, 00:12:11.301 { 00:12:11.301 "name": "BaseBdev2", 00:12:11.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.301 "is_configured": false, 00:12:11.301 "data_offset": 0, 00:12:11.301 "data_size": 0 00:12:11.301 } 00:12:11.301 
] 00:12:11.301 }' 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.301 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.560 [2024-11-27 14:11:42.493260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:11.560 [2024-11-27 14:11:42.493336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.560 [2024-11-27 14:11:42.493344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:11.560 [2024-11-27 14:11:42.493613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:11.560 [2024-11-27 14:11:42.493797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.560 [2024-11-27 14:11:42.493811] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:11.560 [2024-11-27 14:11:42.494113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.560 BaseBdev2 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.560 14:11:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.560 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.820 [ 00:12:11.820 { 00:12:11.820 "name": "BaseBdev2", 00:12:11.820 "aliases": [ 00:12:11.820 "c1e10d05-9a00-4df3-93a9-e1931d64038e" 00:12:11.820 ], 00:12:11.820 "product_name": "Malloc disk", 00:12:11.820 "block_size": 512, 00:12:11.820 "num_blocks": 65536, 00:12:11.820 "uuid": "c1e10d05-9a00-4df3-93a9-e1931d64038e", 00:12:11.820 "assigned_rate_limits": { 00:12:11.820 "rw_ios_per_sec": 0, 00:12:11.820 "rw_mbytes_per_sec": 0, 00:12:11.820 "r_mbytes_per_sec": 0, 00:12:11.820 "w_mbytes_per_sec": 0 00:12:11.820 }, 00:12:11.820 "claimed": true, 00:12:11.820 "claim_type": "exclusive_write", 00:12:11.820 "zoned": false, 00:12:11.820 "supported_io_types": { 00:12:11.820 "read": true, 00:12:11.820 "write": true, 00:12:11.820 "unmap": true, 00:12:11.820 "flush": true, 00:12:11.820 "reset": true, 00:12:11.820 "nvme_admin": false, 00:12:11.820 "nvme_io": false, 00:12:11.820 "nvme_io_md": 
false, 00:12:11.820 "write_zeroes": true, 00:12:11.820 "zcopy": true, 00:12:11.820 "get_zone_info": false, 00:12:11.820 "zone_management": false, 00:12:11.820 "zone_append": false, 00:12:11.820 "compare": false, 00:12:11.820 "compare_and_write": false, 00:12:11.820 "abort": true, 00:12:11.820 "seek_hole": false, 00:12:11.820 "seek_data": false, 00:12:11.820 "copy": true, 00:12:11.820 "nvme_iov_md": false 00:12:11.820 }, 00:12:11.820 "memory_domains": [ 00:12:11.820 { 00:12:11.820 "dma_device_id": "system", 00:12:11.820 "dma_device_type": 1 00:12:11.820 }, 00:12:11.820 { 00:12:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.820 "dma_device_type": 2 00:12:11.820 } 00:12:11.820 ], 00:12:11.820 "driver_specific": {} 00:12:11.820 } 00:12:11.820 ] 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.820 "name": "Existed_Raid", 00:12:11.820 "uuid": "b48cf7b9-0649-43d1-a512-650bd40f0216", 00:12:11.820 "strip_size_kb": 0, 00:12:11.820 "state": "online", 00:12:11.820 "raid_level": "raid1", 00:12:11.820 "superblock": false, 00:12:11.820 "num_base_bdevs": 2, 00:12:11.820 "num_base_bdevs_discovered": 2, 00:12:11.820 "num_base_bdevs_operational": 2, 00:12:11.820 "base_bdevs_list": [ 00:12:11.820 { 00:12:11.820 "name": "BaseBdev1", 00:12:11.820 "uuid": "2cb98583-80a8-4a28-b11c-fe0df8354cc9", 00:12:11.820 "is_configured": true, 00:12:11.820 "data_offset": 0, 00:12:11.820 "data_size": 65536 00:12:11.820 }, 00:12:11.820 { 00:12:11.820 "name": "BaseBdev2", 00:12:11.820 "uuid": "c1e10d05-9a00-4df3-93a9-e1931d64038e", 00:12:11.820 "is_configured": true, 00:12:11.820 "data_offset": 0, 00:12:11.820 "data_size": 65536 00:12:11.820 } 00:12:11.820 ] 00:12:11.820 }' 00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:12:11.820 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.080 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.080 [2024-11-27 14:11:43.020760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.340 "name": "Existed_Raid", 00:12:12.340 "aliases": [ 00:12:12.340 "b48cf7b9-0649-43d1-a512-650bd40f0216" 00:12:12.340 ], 00:12:12.340 "product_name": "Raid Volume", 00:12:12.340 "block_size": 512, 00:12:12.340 "num_blocks": 65536, 00:12:12.340 "uuid": "b48cf7b9-0649-43d1-a512-650bd40f0216", 00:12:12.340 "assigned_rate_limits": { 00:12:12.340 "rw_ios_per_sec": 0, 00:12:12.340 "rw_mbytes_per_sec": 0, 00:12:12.340 "r_mbytes_per_sec": 
0, 00:12:12.340 "w_mbytes_per_sec": 0 00:12:12.340 }, 00:12:12.340 "claimed": false, 00:12:12.340 "zoned": false, 00:12:12.340 "supported_io_types": { 00:12:12.340 "read": true, 00:12:12.340 "write": true, 00:12:12.340 "unmap": false, 00:12:12.340 "flush": false, 00:12:12.340 "reset": true, 00:12:12.340 "nvme_admin": false, 00:12:12.340 "nvme_io": false, 00:12:12.340 "nvme_io_md": false, 00:12:12.340 "write_zeroes": true, 00:12:12.340 "zcopy": false, 00:12:12.340 "get_zone_info": false, 00:12:12.340 "zone_management": false, 00:12:12.340 "zone_append": false, 00:12:12.340 "compare": false, 00:12:12.340 "compare_and_write": false, 00:12:12.340 "abort": false, 00:12:12.340 "seek_hole": false, 00:12:12.340 "seek_data": false, 00:12:12.340 "copy": false, 00:12:12.340 "nvme_iov_md": false 00:12:12.340 }, 00:12:12.340 "memory_domains": [ 00:12:12.340 { 00:12:12.340 "dma_device_id": "system", 00:12:12.340 "dma_device_type": 1 00:12:12.340 }, 00:12:12.340 { 00:12:12.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.340 "dma_device_type": 2 00:12:12.340 }, 00:12:12.340 { 00:12:12.340 "dma_device_id": "system", 00:12:12.340 "dma_device_type": 1 00:12:12.340 }, 00:12:12.340 { 00:12:12.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.340 "dma_device_type": 2 00:12:12.340 } 00:12:12.340 ], 00:12:12.340 "driver_specific": { 00:12:12.340 "raid": { 00:12:12.340 "uuid": "b48cf7b9-0649-43d1-a512-650bd40f0216", 00:12:12.340 "strip_size_kb": 0, 00:12:12.340 "state": "online", 00:12:12.340 "raid_level": "raid1", 00:12:12.340 "superblock": false, 00:12:12.340 "num_base_bdevs": 2, 00:12:12.340 "num_base_bdevs_discovered": 2, 00:12:12.340 "num_base_bdevs_operational": 2, 00:12:12.340 "base_bdevs_list": [ 00:12:12.340 { 00:12:12.340 "name": "BaseBdev1", 00:12:12.340 "uuid": "2cb98583-80a8-4a28-b11c-fe0df8354cc9", 00:12:12.340 "is_configured": true, 00:12:12.340 "data_offset": 0, 00:12:12.340 "data_size": 65536 00:12:12.340 }, 00:12:12.340 { 00:12:12.340 "name": "BaseBdev2", 
00:12:12.340 "uuid": "c1e10d05-9a00-4df3-93a9-e1931d64038e", 00:12:12.340 "is_configured": true, 00:12:12.340 "data_offset": 0, 00:12:12.340 "data_size": 65536 00:12:12.340 } 00:12:12.340 ] 00:12:12.340 } 00:12:12.340 } 00:12:12.340 }' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:12.340 BaseBdev2' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.340 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.341 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.341 [2024-11-27 14:11:43.264329] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.601 "name": "Existed_Raid", 00:12:12.601 "uuid": "b48cf7b9-0649-43d1-a512-650bd40f0216", 00:12:12.601 "strip_size_kb": 0, 00:12:12.601 "state": "online", 00:12:12.601 "raid_level": "raid1", 00:12:12.601 "superblock": false, 00:12:12.601 "num_base_bdevs": 2, 00:12:12.601 "num_base_bdevs_discovered": 1, 00:12:12.601 "num_base_bdevs_operational": 1, 00:12:12.601 "base_bdevs_list": [ 00:12:12.601 
{ 00:12:12.601 "name": null, 00:12:12.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.601 "is_configured": false, 00:12:12.601 "data_offset": 0, 00:12:12.601 "data_size": 65536 00:12:12.601 }, 00:12:12.601 { 00:12:12.601 "name": "BaseBdev2", 00:12:12.601 "uuid": "c1e10d05-9a00-4df3-93a9-e1931d64038e", 00:12:12.601 "is_configured": true, 00:12:12.601 "data_offset": 0, 00:12:12.601 "data_size": 65536 00:12:12.601 } 00:12:12.601 ] 00:12:12.601 }' 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.601 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:12.862 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:13.120 [2024-11-27 14:11:43.837074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.120 [2024-11-27 14:11:43.837202] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.120 [2024-11-27 14:11:43.941618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.120 [2024-11-27 14:11:43.941682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.120 [2024-11-27 14:11:43.941696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62873 00:12:13.120 14:11:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62873 ']' 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62873 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:13.120 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62873 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62873' 00:12:13.121 killing process with pid 62873 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62873 00:12:13.121 [2024-11-27 14:11:44.038375] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:13.121 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62873 00:12:13.121 [2024-11-27 14:11:44.058483] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:14.501 00:12:14.501 real 0m5.290s 00:12:14.501 user 0m7.596s 00:12:14.501 sys 0m0.870s 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 ************************************ 00:12:14.501 END TEST raid_state_function_test 00:12:14.501 ************************************ 00:12:14.501 14:11:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:14.501 14:11:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:14.501 14:11:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.501 14:11:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 ************************************ 00:12:14.501 START TEST raid_state_function_test_sb 00:12:14.501 ************************************ 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63126 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63126' 00:12:14.501 Process raid pid: 63126 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63126 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63126 ']' 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.501 14:11:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.501 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 [2024-11-27 14:11:45.437044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:14.501 [2024-11-27 14:11:45.437258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.760 [2024-11-27 14:11:45.619699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.020 [2024-11-27 14:11:45.756074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.279 [2024-11-27 14:11:45.985617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.279 [2024-11-27 14:11:45.985760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.539 [2024-11-27 14:11:46.318481] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:15.539 [2024-11-27 14:11:46.318549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:15.539 [2024-11-27 14:11:46.318562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.539 [2024-11-27 14:11:46.318573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.539 "name": "Existed_Raid", 00:12:15.539 "uuid": "dae0bede-46a6-485e-b6fb-9f4deec68caf", 00:12:15.539 "strip_size_kb": 0, 00:12:15.539 "state": "configuring", 00:12:15.539 "raid_level": "raid1", 00:12:15.539 "superblock": true, 00:12:15.539 "num_base_bdevs": 2, 00:12:15.539 "num_base_bdevs_discovered": 0, 00:12:15.539 "num_base_bdevs_operational": 2, 00:12:15.539 "base_bdevs_list": [ 00:12:15.539 { 00:12:15.539 "name": "BaseBdev1", 00:12:15.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.539 "is_configured": false, 00:12:15.539 "data_offset": 0, 00:12:15.539 "data_size": 0 00:12:15.539 }, 00:12:15.539 { 00:12:15.539 "name": "BaseBdev2", 00:12:15.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.539 "is_configured": false, 00:12:15.539 "data_offset": 0, 00:12:15.539 "data_size": 0 00:12:15.539 } 00:12:15.539 ] 00:12:15.539 }' 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.539 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.798 [2024-11-27 14:11:46.741695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:12:15.798 [2024-11-27 14:11:46.741803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.798 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 [2024-11-27 14:11:46.753698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:16.058 [2024-11-27 14:11:46.753826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:16.058 [2024-11-27 14:11:46.753875] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.058 [2024-11-27 14:11:46.753922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 [2024-11-27 14:11:46.806787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.058 BaseBdev1 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 [ 00:12:16.058 { 00:12:16.058 "name": "BaseBdev1", 00:12:16.058 "aliases": [ 00:12:16.058 "b3526a7c-e021-4175-a841-071b25b197b5" 00:12:16.058 ], 00:12:16.058 "product_name": "Malloc disk", 00:12:16.058 "block_size": 512, 00:12:16.058 "num_blocks": 65536, 00:12:16.058 "uuid": "b3526a7c-e021-4175-a841-071b25b197b5", 00:12:16.058 "assigned_rate_limits": { 00:12:16.058 "rw_ios_per_sec": 0, 00:12:16.058 "rw_mbytes_per_sec": 0, 00:12:16.058 "r_mbytes_per_sec": 0, 00:12:16.058 "w_mbytes_per_sec": 0 00:12:16.058 }, 00:12:16.058 "claimed": true, 
00:12:16.058 "claim_type": "exclusive_write", 00:12:16.058 "zoned": false, 00:12:16.058 "supported_io_types": { 00:12:16.058 "read": true, 00:12:16.058 "write": true, 00:12:16.058 "unmap": true, 00:12:16.058 "flush": true, 00:12:16.058 "reset": true, 00:12:16.058 "nvme_admin": false, 00:12:16.058 "nvme_io": false, 00:12:16.058 "nvme_io_md": false, 00:12:16.058 "write_zeroes": true, 00:12:16.058 "zcopy": true, 00:12:16.058 "get_zone_info": false, 00:12:16.058 "zone_management": false, 00:12:16.058 "zone_append": false, 00:12:16.058 "compare": false, 00:12:16.058 "compare_and_write": false, 00:12:16.058 "abort": true, 00:12:16.058 "seek_hole": false, 00:12:16.058 "seek_data": false, 00:12:16.058 "copy": true, 00:12:16.058 "nvme_iov_md": false 00:12:16.058 }, 00:12:16.058 "memory_domains": [ 00:12:16.058 { 00:12:16.058 "dma_device_id": "system", 00:12:16.058 "dma_device_type": 1 00:12:16.058 }, 00:12:16.058 { 00:12:16.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.058 "dma_device_type": 2 00:12:16.058 } 00:12:16.058 ], 00:12:16.058 "driver_specific": {} 00:12:16.058 } 00:12:16.058 ] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.058 "name": "Existed_Raid", 00:12:16.058 "uuid": "9ab8068f-cd7b-4d51-baef-d47c98ae1f93", 00:12:16.058 "strip_size_kb": 0, 00:12:16.058 "state": "configuring", 00:12:16.058 "raid_level": "raid1", 00:12:16.058 "superblock": true, 00:12:16.058 "num_base_bdevs": 2, 00:12:16.058 "num_base_bdevs_discovered": 1, 00:12:16.058 "num_base_bdevs_operational": 2, 00:12:16.058 "base_bdevs_list": [ 00:12:16.058 { 00:12:16.058 "name": "BaseBdev1", 00:12:16.058 "uuid": "b3526a7c-e021-4175-a841-071b25b197b5", 00:12:16.058 "is_configured": true, 00:12:16.058 "data_offset": 2048, 00:12:16.058 "data_size": 63488 00:12:16.058 }, 00:12:16.058 { 00:12:16.058 "name": "BaseBdev2", 00:12:16.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.058 "is_configured": false, 00:12:16.058 
"data_offset": 0, 00:12:16.058 "data_size": 0 00:12:16.058 } 00:12:16.058 ] 00:12:16.058 }' 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.058 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.318 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:16.318 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.318 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.577 [2024-11-27 14:11:47.274082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:16.577 [2024-11-27 14:11:47.274239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.577 [2024-11-27 14:11:47.286116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:16.577 [2024-11-27 14:11:47.288458] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:16.577 [2024-11-27 14:11:47.288557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.577 "name": "Existed_Raid", 00:12:16.577 "uuid": "20084c05-9352-497b-8632-ec6109a80d32", 00:12:16.577 "strip_size_kb": 0, 00:12:16.577 "state": "configuring", 00:12:16.577 "raid_level": "raid1", 00:12:16.577 "superblock": true, 00:12:16.577 "num_base_bdevs": 2, 00:12:16.577 "num_base_bdevs_discovered": 1, 00:12:16.577 "num_base_bdevs_operational": 2, 00:12:16.577 "base_bdevs_list": [ 00:12:16.577 { 00:12:16.577 "name": "BaseBdev1", 00:12:16.577 "uuid": "b3526a7c-e021-4175-a841-071b25b197b5", 00:12:16.577 "is_configured": true, 00:12:16.577 "data_offset": 2048, 00:12:16.577 "data_size": 63488 00:12:16.577 }, 00:12:16.577 { 00:12:16.577 "name": "BaseBdev2", 00:12:16.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.577 "is_configured": false, 00:12:16.577 "data_offset": 0, 00:12:16.577 "data_size": 0 00:12:16.577 } 00:12:16.577 ] 00:12:16.577 }' 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.577 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.837 [2024-11-27 14:11:47.769296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.837 [2024-11-27 14:11:47.769785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.837 [2024-11-27 14:11:47.769849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.837 [2024-11-27 14:11:47.770176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:16.837 
BaseBdev2 00:12:16.837 [2024-11-27 14:11:47.770421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.837 [2024-11-27 14:11:47.770476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.837 [2024-11-27 14:11:47.770686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 14:11:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.096 [ 00:12:17.096 { 00:12:17.096 "name": "BaseBdev2", 00:12:17.096 "aliases": [ 00:12:17.096 "a667a637-ab53-4939-af1f-5c4543e22502" 00:12:17.096 ], 00:12:17.096 "product_name": "Malloc disk", 00:12:17.096 "block_size": 512, 00:12:17.096 "num_blocks": 65536, 00:12:17.096 "uuid": "a667a637-ab53-4939-af1f-5c4543e22502", 00:12:17.096 "assigned_rate_limits": { 00:12:17.096 "rw_ios_per_sec": 0, 00:12:17.096 "rw_mbytes_per_sec": 0, 00:12:17.096 "r_mbytes_per_sec": 0, 00:12:17.096 "w_mbytes_per_sec": 0 00:12:17.096 }, 00:12:17.096 "claimed": true, 00:12:17.096 "claim_type": "exclusive_write", 00:12:17.096 "zoned": false, 00:12:17.096 "supported_io_types": { 00:12:17.096 "read": true, 00:12:17.096 "write": true, 00:12:17.096 "unmap": true, 00:12:17.096 "flush": true, 00:12:17.096 "reset": true, 00:12:17.096 "nvme_admin": false, 00:12:17.096 "nvme_io": false, 00:12:17.096 "nvme_io_md": false, 00:12:17.096 "write_zeroes": true, 00:12:17.096 "zcopy": true, 00:12:17.096 "get_zone_info": false, 00:12:17.096 "zone_management": false, 00:12:17.096 "zone_append": false, 00:12:17.096 "compare": false, 00:12:17.096 "compare_and_write": false, 00:12:17.096 "abort": true, 00:12:17.096 "seek_hole": false, 00:12:17.096 "seek_data": false, 00:12:17.096 "copy": true, 00:12:17.096 "nvme_iov_md": false 00:12:17.096 }, 00:12:17.096 "memory_domains": [ 00:12:17.096 { 00:12:17.096 "dma_device_id": "system", 00:12:17.096 "dma_device_type": 1 00:12:17.096 }, 00:12:17.096 { 00:12:17.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.096 "dma_device_type": 2 00:12:17.096 } 00:12:17.096 ], 00:12:17.096 "driver_specific": {} 00:12:17.096 } 00:12:17.096 ] 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:17.096 "name": "Existed_Raid", 00:12:17.096 "uuid": "20084c05-9352-497b-8632-ec6109a80d32", 00:12:17.096 "strip_size_kb": 0, 00:12:17.096 "state": "online", 00:12:17.096 "raid_level": "raid1", 00:12:17.096 "superblock": true, 00:12:17.096 "num_base_bdevs": 2, 00:12:17.096 "num_base_bdevs_discovered": 2, 00:12:17.096 "num_base_bdevs_operational": 2, 00:12:17.096 "base_bdevs_list": [ 00:12:17.096 { 00:12:17.096 "name": "BaseBdev1", 00:12:17.096 "uuid": "b3526a7c-e021-4175-a841-071b25b197b5", 00:12:17.096 "is_configured": true, 00:12:17.096 "data_offset": 2048, 00:12:17.096 "data_size": 63488 00:12:17.096 }, 00:12:17.096 { 00:12:17.096 "name": "BaseBdev2", 00:12:17.096 "uuid": "a667a637-ab53-4939-af1f-5c4543e22502", 00:12:17.096 "is_configured": true, 00:12:17.096 "data_offset": 2048, 00:12:17.096 "data_size": 63488 00:12:17.096 } 00:12:17.096 ] 00:12:17.096 }' 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.096 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.354 14:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.354 [2024-11-27 14:11:48.272843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.354 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.613 "name": "Existed_Raid", 00:12:17.613 "aliases": [ 00:12:17.613 "20084c05-9352-497b-8632-ec6109a80d32" 00:12:17.613 ], 00:12:17.613 "product_name": "Raid Volume", 00:12:17.613 "block_size": 512, 00:12:17.613 "num_blocks": 63488, 00:12:17.613 "uuid": "20084c05-9352-497b-8632-ec6109a80d32", 00:12:17.613 "assigned_rate_limits": { 00:12:17.613 "rw_ios_per_sec": 0, 00:12:17.613 "rw_mbytes_per_sec": 0, 00:12:17.613 "r_mbytes_per_sec": 0, 00:12:17.613 "w_mbytes_per_sec": 0 00:12:17.613 }, 00:12:17.613 "claimed": false, 00:12:17.613 "zoned": false, 00:12:17.613 "supported_io_types": { 00:12:17.613 "read": true, 00:12:17.613 "write": true, 00:12:17.613 "unmap": false, 00:12:17.613 "flush": false, 00:12:17.613 "reset": true, 00:12:17.613 "nvme_admin": false, 00:12:17.613 "nvme_io": false, 00:12:17.613 "nvme_io_md": false, 00:12:17.613 "write_zeroes": true, 00:12:17.613 "zcopy": false, 00:12:17.613 "get_zone_info": false, 00:12:17.613 "zone_management": false, 00:12:17.613 "zone_append": false, 00:12:17.613 "compare": false, 00:12:17.613 "compare_and_write": false, 00:12:17.613 "abort": false, 00:12:17.613 "seek_hole": false, 00:12:17.613 "seek_data": false, 00:12:17.613 "copy": false, 00:12:17.613 "nvme_iov_md": false 00:12:17.613 }, 00:12:17.613 "memory_domains": [ 00:12:17.613 { 00:12:17.613 "dma_device_id": "system", 00:12:17.613 
"dma_device_type": 1 00:12:17.613 }, 00:12:17.613 { 00:12:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.613 "dma_device_type": 2 00:12:17.613 }, 00:12:17.613 { 00:12:17.613 "dma_device_id": "system", 00:12:17.613 "dma_device_type": 1 00:12:17.613 }, 00:12:17.613 { 00:12:17.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.613 "dma_device_type": 2 00:12:17.613 } 00:12:17.613 ], 00:12:17.613 "driver_specific": { 00:12:17.613 "raid": { 00:12:17.613 "uuid": "20084c05-9352-497b-8632-ec6109a80d32", 00:12:17.613 "strip_size_kb": 0, 00:12:17.613 "state": "online", 00:12:17.613 "raid_level": "raid1", 00:12:17.613 "superblock": true, 00:12:17.613 "num_base_bdevs": 2, 00:12:17.613 "num_base_bdevs_discovered": 2, 00:12:17.613 "num_base_bdevs_operational": 2, 00:12:17.613 "base_bdevs_list": [ 00:12:17.613 { 00:12:17.613 "name": "BaseBdev1", 00:12:17.613 "uuid": "b3526a7c-e021-4175-a841-071b25b197b5", 00:12:17.613 "is_configured": true, 00:12:17.613 "data_offset": 2048, 00:12:17.613 "data_size": 63488 00:12:17.613 }, 00:12:17.613 { 00:12:17.613 "name": "BaseBdev2", 00:12:17.613 "uuid": "a667a637-ab53-4939-af1f-5c4543e22502", 00:12:17.613 "is_configured": true, 00:12:17.613 "data_offset": 2048, 00:12:17.613 "data_size": 63488 00:12:17.613 } 00:12:17.613 ] 00:12:17.613 } 00:12:17.613 } 00:12:17.613 }' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.613 BaseBdev2' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.613 14:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.613 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.613 [2024-11-27 14:11:48.520311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.882 "name": "Existed_Raid", 00:12:17.882 "uuid": "20084c05-9352-497b-8632-ec6109a80d32", 00:12:17.882 "strip_size_kb": 0, 00:12:17.882 "state": "online", 00:12:17.882 "raid_level": "raid1", 00:12:17.882 "superblock": true, 00:12:17.882 "num_base_bdevs": 2, 00:12:17.882 "num_base_bdevs_discovered": 1, 00:12:17.882 "num_base_bdevs_operational": 1, 00:12:17.882 "base_bdevs_list": [ 00:12:17.882 { 00:12:17.882 "name": null, 00:12:17.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.882 "is_configured": false, 00:12:17.882 "data_offset": 0, 00:12:17.882 "data_size": 63488 00:12:17.882 }, 00:12:17.882 { 00:12:17.882 "name": "BaseBdev2", 00:12:17.882 "uuid": "a667a637-ab53-4939-af1f-5c4543e22502", 00:12:17.882 "is_configured": true, 00:12:17.882 "data_offset": 2048, 00:12:17.882 "data_size": 63488 00:12:17.882 } 00:12:17.882 ] 00:12:17.882 }' 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.882 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.149 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.149 [2024-11-27 14:11:49.075245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.149 [2024-11-27 14:11:49.075366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.416 [2024-11-27 14:11:49.187339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.416 [2024-11-27 14:11:49.187408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.416 [2024-11-27 14:11:49.187421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63126 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63126 ']' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63126 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63126 00:12:18.416 killing process with pid 63126 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63126' 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63126 00:12:18.416 [2024-11-27 14:11:49.282091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.416 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63126 00:12:18.416 [2024-11-27 14:11:49.302873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.839 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:19.839 ************************************ 00:12:19.839 END TEST raid_state_function_test_sb 00:12:19.839 ************************************ 00:12:19.839 00:12:19.839 real 0m5.232s 00:12:19.839 user 0m7.415s 00:12:19.839 sys 0m0.893s 00:12:19.839 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.839 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.839 14:11:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:19.839 14:11:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:19.839 14:11:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.839 14:11:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.839 ************************************ 00:12:19.839 START TEST raid_superblock_test 00:12:19.839 ************************************ 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63378 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63378 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63378 ']' 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.839 14:11:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.839 [2024-11-27 14:11:50.721379] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:19.839 [2024-11-27 14:11:50.721609] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:12:20.099 [2024-11-27 14:11:50.879146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.099 [2024-11-27 14:11:51.004037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.360 [2024-11-27 14:11:51.220483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.360 [2024-11-27 14:11:51.220651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.931 14:11:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 malloc1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 [2024-11-27 14:11:51.688706] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.931 [2024-11-27 14:11:51.688875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.931 [2024-11-27 14:11:51.688949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.931 [2024-11-27 14:11:51.688994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.931 
[2024-11-27 14:11:51.691608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.931 [2024-11-27 14:11:51.691724] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.931 pt1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 malloc2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 [2024-11-27 14:11:51.747855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.931 [2024-11-27 14:11:51.747921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.931 [2024-11-27 14:11:51.747949] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.931 [2024-11-27 14:11:51.747960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.931 [2024-11-27 14:11:51.750337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.931 [2024-11-27 14:11:51.750452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.931 pt2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 [2024-11-27 14:11:51.759872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:20.931 [2024-11-27 14:11:51.761864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:20.931 [2024-11-27 14:11:51.762075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:20.931 [2024-11-27 14:11:51.762116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.931 [2024-11-27 
14:11:51.762408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:20.931 [2024-11-27 14:11:51.762587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:20.931 [2024-11-27 14:11:51.762604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:20.931 [2024-11-27 14:11:51.762783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.931 14:11:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.931 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.931 "name": "raid_bdev1", 00:12:20.931 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:20.931 "strip_size_kb": 0, 00:12:20.931 "state": "online", 00:12:20.931 "raid_level": "raid1", 00:12:20.932 "superblock": true, 00:12:20.932 "num_base_bdevs": 2, 00:12:20.932 "num_base_bdevs_discovered": 2, 00:12:20.932 "num_base_bdevs_operational": 2, 00:12:20.932 "base_bdevs_list": [ 00:12:20.932 { 00:12:20.932 "name": "pt1", 00:12:20.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.932 "is_configured": true, 00:12:20.932 "data_offset": 2048, 00:12:20.932 "data_size": 63488 00:12:20.932 }, 00:12:20.932 { 00:12:20.932 "name": "pt2", 00:12:20.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.932 "is_configured": true, 00:12:20.932 "data_offset": 2048, 00:12:20.932 "data_size": 63488 00:12:20.932 } 00:12:20.932 ] 00:12:20.932 }' 00:12:20.932 14:11:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.932 14:11:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:21.502 
14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.502 [2024-11-27 14:11:52.279346] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:21.502 "name": "raid_bdev1", 00:12:21.502 "aliases": [ 00:12:21.502 "5b784954-40a1-40a4-807e-8f7b0fa899da" 00:12:21.502 ], 00:12:21.502 "product_name": "Raid Volume", 00:12:21.502 "block_size": 512, 00:12:21.502 "num_blocks": 63488, 00:12:21.502 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:21.502 "assigned_rate_limits": { 00:12:21.502 "rw_ios_per_sec": 0, 00:12:21.502 "rw_mbytes_per_sec": 0, 00:12:21.502 "r_mbytes_per_sec": 0, 00:12:21.502 "w_mbytes_per_sec": 0 00:12:21.502 }, 00:12:21.502 "claimed": false, 00:12:21.502 "zoned": false, 00:12:21.502 "supported_io_types": { 00:12:21.502 "read": true, 00:12:21.502 "write": true, 00:12:21.502 "unmap": false, 00:12:21.502 "flush": false, 00:12:21.502 "reset": true, 00:12:21.502 "nvme_admin": false, 00:12:21.502 "nvme_io": false, 00:12:21.502 "nvme_io_md": false, 00:12:21.502 "write_zeroes": true, 00:12:21.502 "zcopy": false, 00:12:21.502 "get_zone_info": false, 00:12:21.502 "zone_management": false, 00:12:21.502 "zone_append": false, 00:12:21.502 "compare": false, 00:12:21.502 "compare_and_write": false, 00:12:21.502 "abort": false, 00:12:21.502 "seek_hole": false, 
00:12:21.502 "seek_data": false, 00:12:21.502 "copy": false, 00:12:21.502 "nvme_iov_md": false 00:12:21.502 }, 00:12:21.502 "memory_domains": [ 00:12:21.502 { 00:12:21.502 "dma_device_id": "system", 00:12:21.502 "dma_device_type": 1 00:12:21.502 }, 00:12:21.502 { 00:12:21.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.502 "dma_device_type": 2 00:12:21.502 }, 00:12:21.502 { 00:12:21.502 "dma_device_id": "system", 00:12:21.502 "dma_device_type": 1 00:12:21.502 }, 00:12:21.502 { 00:12:21.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.502 "dma_device_type": 2 00:12:21.502 } 00:12:21.502 ], 00:12:21.502 "driver_specific": { 00:12:21.502 "raid": { 00:12:21.502 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:21.502 "strip_size_kb": 0, 00:12:21.502 "state": "online", 00:12:21.502 "raid_level": "raid1", 00:12:21.502 "superblock": true, 00:12:21.502 "num_base_bdevs": 2, 00:12:21.502 "num_base_bdevs_discovered": 2, 00:12:21.502 "num_base_bdevs_operational": 2, 00:12:21.502 "base_bdevs_list": [ 00:12:21.502 { 00:12:21.502 "name": "pt1", 00:12:21.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.502 "is_configured": true, 00:12:21.502 "data_offset": 2048, 00:12:21.502 "data_size": 63488 00:12:21.502 }, 00:12:21.502 { 00:12:21.502 "name": "pt2", 00:12:21.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.502 "is_configured": true, 00:12:21.502 "data_offset": 2048, 00:12:21.502 "data_size": 63488 00:12:21.502 } 00:12:21.502 ] 00:12:21.502 } 00:12:21.502 } 00:12:21.502 }' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:21.502 pt2' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.502 14:11:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.502 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 [2024-11-27 14:11:52.526874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b784954-40a1-40a4-807e-8f7b0fa899da 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5b784954-40a1-40a4-807e-8f7b0fa899da ']' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 [2024-11-27 14:11:52.554492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.764 [2024-11-27 14:11:52.554523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.764 [2024-11-27 14:11:52.554621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.764 [2024-11-27 14:11:52.554685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.764 [2024-11-27 14:11:52.554699] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 [2024-11-27 14:11:52.694327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:21.764 [2024-11-27 14:11:52.696531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:21.764 [2024-11-27 14:11:52.696611] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:12:21.764 [2024-11-27 14:11:52.696674] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:21.764 [2024-11-27 14:11:52.696692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.764 [2024-11-27 14:11:52.696705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:21.764 request: 00:12:21.764 { 00:12:21.764 "name": "raid_bdev1", 00:12:21.764 "raid_level": "raid1", 00:12:21.764 "base_bdevs": [ 00:12:21.764 "malloc1", 00:12:21.764 "malloc2" 00:12:21.764 ], 00:12:21.764 "superblock": false, 00:12:21.764 "method": "bdev_raid_create", 00:12:21.764 "req_id": 1 00:12:21.764 } 00:12:21.764 Got JSON-RPC error response 00:12:21.764 response: 00:12:21.764 { 00:12:21.764 "code": -17, 00:12:21.764 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:21.764 } 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.764 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 [2024-11-27 14:11:52.762285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.025 [2024-11-27 14:11:52.762455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.025 [2024-11-27 14:11:52.762514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:22.025 [2024-11-27 14:11:52.762558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.025 [2024-11-27 14:11:52.765228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.025 [2024-11-27 14:11:52.765345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.025 [2024-11-27 14:11:52.765501] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:22.025 [2024-11-27 14:11:52.765616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.025 pt1 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.025 14:11:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.025 "name": "raid_bdev1", 00:12:22.025 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:22.025 "strip_size_kb": 0, 00:12:22.025 "state": "configuring", 00:12:22.025 "raid_level": "raid1", 00:12:22.025 "superblock": true, 00:12:22.025 "num_base_bdevs": 2, 00:12:22.025 "num_base_bdevs_discovered": 1, 00:12:22.025 "num_base_bdevs_operational": 2, 00:12:22.025 "base_bdevs_list": [ 00:12:22.025 { 00:12:22.025 "name": "pt1", 00:12:22.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.025 
"is_configured": true, 00:12:22.025 "data_offset": 2048, 00:12:22.025 "data_size": 63488 00:12:22.025 }, 00:12:22.025 { 00:12:22.025 "name": null, 00:12:22.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.025 "is_configured": false, 00:12:22.025 "data_offset": 2048, 00:12:22.025 "data_size": 63488 00:12:22.025 } 00:12:22.025 ] 00:12:22.025 }' 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.025 14:11:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.595 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.595 [2024-11-27 14:11:53.245438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.595 [2024-11-27 14:11:53.245544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.595 [2024-11-27 14:11:53.245581] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:22.595 [2024-11-27 14:11:53.245594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.596 [2024-11-27 14:11:53.246091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.596 [2024-11-27 14:11:53.246115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.596 [2024-11-27 14:11:53.246233] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.596 [2024-11-27 14:11:53.246264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.596 [2024-11-27 14:11:53.246397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.596 [2024-11-27 14:11:53.246411] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.596 [2024-11-27 14:11:53.246683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:22.596 [2024-11-27 14:11:53.246867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.596 [2024-11-27 14:11:53.246885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:22.596 [2024-11-27 14:11:53.247045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.596 pt2 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.596 
14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.596 "name": "raid_bdev1", 00:12:22.596 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:22.596 "strip_size_kb": 0, 00:12:22.596 "state": "online", 00:12:22.596 "raid_level": "raid1", 00:12:22.596 "superblock": true, 00:12:22.596 "num_base_bdevs": 2, 00:12:22.596 "num_base_bdevs_discovered": 2, 00:12:22.596 "num_base_bdevs_operational": 2, 00:12:22.596 "base_bdevs_list": [ 00:12:22.596 { 00:12:22.596 "name": "pt1", 00:12:22.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.596 "is_configured": true, 00:12:22.596 "data_offset": 2048, 00:12:22.596 "data_size": 63488 00:12:22.596 }, 00:12:22.596 { 00:12:22.596 "name": "pt2", 00:12:22.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.596 "is_configured": true, 00:12:22.596 "data_offset": 2048, 00:12:22.596 "data_size": 63488 00:12:22.596 } 00:12:22.596 ] 00:12:22.596 }' 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:22.596 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.856 [2024-11-27 14:11:53.645036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.856 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.856 "name": "raid_bdev1", 00:12:22.856 "aliases": [ 00:12:22.856 "5b784954-40a1-40a4-807e-8f7b0fa899da" 00:12:22.856 ], 00:12:22.856 "product_name": "Raid Volume", 00:12:22.856 "block_size": 512, 00:12:22.856 "num_blocks": 63488, 00:12:22.856 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:22.856 "assigned_rate_limits": { 00:12:22.856 "rw_ios_per_sec": 0, 00:12:22.856 "rw_mbytes_per_sec": 0, 00:12:22.856 "r_mbytes_per_sec": 0, 00:12:22.856 "w_mbytes_per_sec": 0 
00:12:22.856 }, 00:12:22.856 "claimed": false, 00:12:22.856 "zoned": false, 00:12:22.856 "supported_io_types": { 00:12:22.856 "read": true, 00:12:22.856 "write": true, 00:12:22.856 "unmap": false, 00:12:22.856 "flush": false, 00:12:22.856 "reset": true, 00:12:22.856 "nvme_admin": false, 00:12:22.857 "nvme_io": false, 00:12:22.857 "nvme_io_md": false, 00:12:22.857 "write_zeroes": true, 00:12:22.857 "zcopy": false, 00:12:22.857 "get_zone_info": false, 00:12:22.857 "zone_management": false, 00:12:22.857 "zone_append": false, 00:12:22.857 "compare": false, 00:12:22.857 "compare_and_write": false, 00:12:22.857 "abort": false, 00:12:22.857 "seek_hole": false, 00:12:22.857 "seek_data": false, 00:12:22.857 "copy": false, 00:12:22.857 "nvme_iov_md": false 00:12:22.857 }, 00:12:22.857 "memory_domains": [ 00:12:22.857 { 00:12:22.857 "dma_device_id": "system", 00:12:22.857 "dma_device_type": 1 00:12:22.857 }, 00:12:22.857 { 00:12:22.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.857 "dma_device_type": 2 00:12:22.857 }, 00:12:22.857 { 00:12:22.857 "dma_device_id": "system", 00:12:22.857 "dma_device_type": 1 00:12:22.857 }, 00:12:22.857 { 00:12:22.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.857 "dma_device_type": 2 00:12:22.857 } 00:12:22.857 ], 00:12:22.857 "driver_specific": { 00:12:22.857 "raid": { 00:12:22.857 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:22.857 "strip_size_kb": 0, 00:12:22.857 "state": "online", 00:12:22.857 "raid_level": "raid1", 00:12:22.857 "superblock": true, 00:12:22.857 "num_base_bdevs": 2, 00:12:22.857 "num_base_bdevs_discovered": 2, 00:12:22.857 "num_base_bdevs_operational": 2, 00:12:22.857 "base_bdevs_list": [ 00:12:22.857 { 00:12:22.857 "name": "pt1", 00:12:22.857 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.857 "is_configured": true, 00:12:22.857 "data_offset": 2048, 00:12:22.857 "data_size": 63488 00:12:22.857 }, 00:12:22.857 { 00:12:22.857 "name": "pt2", 00:12:22.857 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:12:22.857 "is_configured": true, 00:12:22.857 "data_offset": 2048, 00:12:22.857 "data_size": 63488 00:12:22.857 } 00:12:22.857 ] 00:12:22.857 } 00:12:22.857 } 00:12:22.857 }' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.857 pt2' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.857 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 [2024-11-27 14:11:53.900638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5b784954-40a1-40a4-807e-8f7b0fa899da '!=' 5b784954-40a1-40a4-807e-8f7b0fa899da ']' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.118 [2024-11-27 14:11:53.944345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:23.118 "name": "raid_bdev1", 00:12:23.118 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:23.118 "strip_size_kb": 0, 00:12:23.118 "state": "online", 00:12:23.118 "raid_level": "raid1", 00:12:23.118 "superblock": true, 00:12:23.118 "num_base_bdevs": 2, 00:12:23.118 "num_base_bdevs_discovered": 1, 00:12:23.118 "num_base_bdevs_operational": 1, 00:12:23.118 "base_bdevs_list": [ 00:12:23.118 { 00:12:23.118 "name": null, 00:12:23.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.118 "is_configured": false, 00:12:23.118 "data_offset": 0, 00:12:23.118 "data_size": 63488 00:12:23.118 }, 00:12:23.118 { 00:12:23.118 "name": "pt2", 00:12:23.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.118 "is_configured": true, 00:12:23.118 "data_offset": 2048, 00:12:23.118 "data_size": 63488 00:12:23.118 } 00:12:23.118 ] 00:12:23.118 }' 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.118 14:11:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 [2024-11-27 14:11:54.395552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.689 [2024-11-27 14:11:54.395650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.689 [2024-11-27 14:11:54.395769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.689 [2024-11-27 14:11:54.395854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.689 [2024-11-27 14:11:54.395906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 [2024-11-27 14:11:54.471433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:23.689 [2024-11-27 14:11:54.471511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.689 [2024-11-27 14:11:54.471531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:23.689 [2024-11-27 14:11:54.471543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.689 [2024-11-27 14:11:54.474053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.689 [2024-11-27 14:11:54.474099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:23.689 [2024-11-27 14:11:54.474285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:23.689 [2024-11-27 14:11:54.474378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:23.689 [2024-11-27 14:11:54.474523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:23.689 [2024-11-27 14:11:54.474539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:23.689 [2024-11-27 14:11:54.474799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:23.689 [2024-11-27 14:11:54.474970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:23.689 [2024-11-27 14:11:54.474981] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:12:23.689 [2024-11-27 14:11:54.475175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.689 pt2 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:23.689 "name": "raid_bdev1", 00:12:23.689 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:23.689 "strip_size_kb": 0, 00:12:23.689 "state": "online", 00:12:23.689 "raid_level": "raid1", 00:12:23.689 "superblock": true, 00:12:23.689 "num_base_bdevs": 2, 00:12:23.689 "num_base_bdevs_discovered": 1, 00:12:23.689 "num_base_bdevs_operational": 1, 00:12:23.689 "base_bdevs_list": [ 00:12:23.689 { 00:12:23.689 "name": null, 00:12:23.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.689 "is_configured": false, 00:12:23.689 "data_offset": 2048, 00:12:23.689 "data_size": 63488 00:12:23.689 }, 00:12:23.689 { 00:12:23.689 "name": "pt2", 00:12:23.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.689 "is_configured": true, 00:12:23.689 "data_offset": 2048, 00:12:23.689 "data_size": 63488 00:12:23.689 } 00:12:23.689 ] 00:12:23.689 }' 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.689 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.950 [2024-11-27 14:11:54.894715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.950 [2024-11-27 14:11:54.894757] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.950 [2024-11-27 14:11:54.894853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.950 [2024-11-27 14:11:54.894914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.950 [2024-11-27 14:11:54.894926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.950 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.209 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.209 [2024-11-27 14:11:54.958646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:24.209 [2024-11-27 14:11:54.958724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.209 [2024-11-27 14:11:54.958748] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:24.209 [2024-11-27 14:11:54.958758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.209 [2024-11-27 14:11:54.961255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.209 [2024-11-27 14:11:54.961298] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:24.209 [2024-11-27 14:11:54.961405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:24.209 [2024-11-27 14:11:54.961460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:24.209 [2024-11-27 14:11:54.961624] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:24.209 [2024-11-27 14:11:54.961638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.209 [2024-11-27 14:11:54.961655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:24.209 [2024-11-27 14:11:54.961728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.209 [2024-11-27 14:11:54.961805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:24.210 [2024-11-27 14:11:54.961822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:24.210 [2024-11-27 14:11:54.962099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:24.210 [2024-11-27 14:11:54.962282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:24.210 [2024-11-27 14:11:54.962298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:24.210 [2024-11-27 14:11:54.962470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.210 pt1 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.210 14:11:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.210 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.210 "name": "raid_bdev1", 00:12:24.210 "uuid": "5b784954-40a1-40a4-807e-8f7b0fa899da", 00:12:24.210 "strip_size_kb": 0, 00:12:24.210 "state": "online", 00:12:24.210 "raid_level": "raid1", 00:12:24.210 "superblock": true, 00:12:24.210 "num_base_bdevs": 2, 00:12:24.210 "num_base_bdevs_discovered": 1, 00:12:24.210 "num_base_bdevs_operational": 
1, 00:12:24.210 "base_bdevs_list": [ 00:12:24.210 { 00:12:24.210 "name": null, 00:12:24.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.210 "is_configured": false, 00:12:24.210 "data_offset": 2048, 00:12:24.210 "data_size": 63488 00:12:24.210 }, 00:12:24.210 { 00:12:24.210 "name": "pt2", 00:12:24.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.210 "is_configured": true, 00:12:24.210 "data_offset": 2048, 00:12:24.210 "data_size": 63488 00:12:24.210 } 00:12:24.210 ] 00:12:24.210 }' 00:12:24.210 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.210 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.468 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:24.468 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.468 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:24.468 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.727 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.727 [2024-11-27 14:11:55.478014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5b784954-40a1-40a4-807e-8f7b0fa899da '!=' 5b784954-40a1-40a4-807e-8f7b0fa899da ']' 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63378 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63378 ']' 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63378 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63378 00:12:24.728 killing process with pid 63378 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63378' 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63378 00:12:24.728 [2024-11-27 14:11:55.561408] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.728 [2024-11-27 14:11:55.561528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.728 14:11:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63378 00:12:24.728 [2024-11-27 14:11:55.561584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.728 [2024-11-27 14:11:55.561600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:12:24.987 [2024-11-27 14:11:55.811931] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.367 14:11:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:26.367 00:12:26.367 real 0m6.459s 00:12:26.367 user 0m9.744s 00:12:26.367 sys 0m1.064s 00:12:26.367 14:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.367 14:11:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.367 ************************************ 00:12:26.367 END TEST raid_superblock_test 00:12:26.367 ************************************ 00:12:26.367 14:11:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:12:26.367 14:11:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.367 14:11:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.367 14:11:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.367 ************************************ 00:12:26.367 START TEST raid_read_error_test 00:12:26.367 ************************************ 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SxUcNQXiKd 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63714 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63714 00:12:26.367 
14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63714 ']' 00:12:26.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.367 14:11:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.367 [2024-11-27 14:11:57.268315] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:26.367 [2024-11-27 14:11:57.268479] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63714 ] 00:12:26.629 [2024-11-27 14:11:57.432411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.629 [2024-11-27 14:11:57.564366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.889 [2024-11-27 14:11:57.788070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.889 [2024-11-27 14:11:57.788267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 BaseBdev1_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 true 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 [2024-11-27 14:11:58.251345] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:27.458 [2024-11-27 14:11:58.251420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.458 [2024-11-27 14:11:58.251447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:27.458 [2024-11-27 14:11:58.251460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.458 [2024-11-27 14:11:58.254017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.458 [2024-11-27 14:11:58.254077] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:12:27.458 BaseBdev1 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 BaseBdev2_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 true 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 [2024-11-27 14:11:58.321906] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:27.458 [2024-11-27 14:11:58.321966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.458 [2024-11-27 14:11:58.322002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:27.458 [2024-11-27 14:11:58.322014] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.458 [2024-11-27 14:11:58.324409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.458 [2024-11-27 14:11:58.324500] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:27.458 BaseBdev2 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 [2024-11-27 14:11:58.333952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:27.458 [2024-11-27 14:11:58.335963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.458 [2024-11-27 14:11:58.336265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:27.458 [2024-11-27 14:11:58.336288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.458 [2024-11-27 14:11:58.336574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:27.458 [2024-11-27 14:11:58.336787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:27.458 [2024-11-27 14:11:58.336799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:27.458 [2024-11-27 14:11:58.336991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.458 "name": "raid_bdev1", 00:12:27.458 "uuid": "3cd6d51c-5b68-4882-8c1d-13e50823edb0", 00:12:27.458 "strip_size_kb": 0, 00:12:27.458 "state": "online", 00:12:27.458 "raid_level": "raid1", 00:12:27.458 "superblock": true, 00:12:27.458 "num_base_bdevs": 2, 00:12:27.458 
"num_base_bdevs_discovered": 2, 00:12:27.458 "num_base_bdevs_operational": 2, 00:12:27.458 "base_bdevs_list": [ 00:12:27.458 { 00:12:27.458 "name": "BaseBdev1", 00:12:27.458 "uuid": "0b52a0f0-bd1b-59d2-812a-454196d6ab8e", 00:12:27.458 "is_configured": true, 00:12:27.458 "data_offset": 2048, 00:12:27.458 "data_size": 63488 00:12:27.458 }, 00:12:27.458 { 00:12:27.458 "name": "BaseBdev2", 00:12:27.458 "uuid": "422a97aa-412f-5cd3-b5f9-7da66fa459a3", 00:12:27.458 "is_configured": true, 00:12:27.458 "data_offset": 2048, 00:12:27.458 "data_size": 63488 00:12:27.458 } 00:12:27.458 ] 00:12:27.458 }' 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.458 14:11:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.042 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.042 14:11:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.042 [2024-11-27 14:11:58.894610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:28.979 14:11:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.979 "name": "raid_bdev1", 00:12:28.979 "uuid": "3cd6d51c-5b68-4882-8c1d-13e50823edb0", 00:12:28.979 "strip_size_kb": 0, 00:12:28.979 "state": "online", 
00:12:28.979 "raid_level": "raid1", 00:12:28.979 "superblock": true, 00:12:28.979 "num_base_bdevs": 2, 00:12:28.979 "num_base_bdevs_discovered": 2, 00:12:28.979 "num_base_bdevs_operational": 2, 00:12:28.979 "base_bdevs_list": [ 00:12:28.979 { 00:12:28.979 "name": "BaseBdev1", 00:12:28.979 "uuid": "0b52a0f0-bd1b-59d2-812a-454196d6ab8e", 00:12:28.979 "is_configured": true, 00:12:28.979 "data_offset": 2048, 00:12:28.979 "data_size": 63488 00:12:28.979 }, 00:12:28.979 { 00:12:28.979 "name": "BaseBdev2", 00:12:28.979 "uuid": "422a97aa-412f-5cd3-b5f9-7da66fa459a3", 00:12:28.979 "is_configured": true, 00:12:28.979 "data_offset": 2048, 00:12:28.979 "data_size": 63488 00:12:28.979 } 00:12:28.979 ] 00:12:28.979 }' 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.979 14:11:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.544 [2024-11-27 14:12:00.256176] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.544 [2024-11-27 14:12:00.256292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.544 [2024-11-27 14:12:00.259645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.544 [2024-11-27 14:12:00.259749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.544 [2024-11-27 14:12:00.259866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.544 [2024-11-27 14:12:00.259923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:12:29.544 { 00:12:29.544 "results": [ 00:12:29.544 { 00:12:29.544 "job": "raid_bdev1", 00:12:29.544 "core_mask": "0x1", 00:12:29.544 "workload": "randrw", 00:12:29.544 "percentage": 50, 00:12:29.544 "status": "finished", 00:12:29.544 "queue_depth": 1, 00:12:29.544 "io_size": 131072, 00:12:29.544 "runtime": 1.362184, 00:12:29.544 "iops": 15414.951284114333, 00:12:29.544 "mibps": 1926.8689105142917, 00:12:29.544 "io_failed": 0, 00:12:29.544 "io_timeout": 0, 00:12:29.544 "avg_latency_us": 61.72978403848817, 00:12:29.544 "min_latency_us": 25.6, 00:12:29.544 "max_latency_us": 1788.646288209607 00:12:29.544 } 00:12:29.544 ], 00:12:29.544 "core_count": 1 00:12:29.544 } 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63714 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63714 ']' 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63714 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63714 00:12:29.544 killing process with pid 63714 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63714' 00:12:29.544 14:12:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63714 00:12:29.544 14:12:00 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63714 00:12:29.544 [2024-11-27 14:12:00.302931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:29.544 [2024-11-27 14:12:00.460708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SxUcNQXiKd 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:30.919 14:12:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:31.178 ************************************ 00:12:31.178 END TEST raid_read_error_test 00:12:31.178 ************************************ 00:12:31.178 00:12:31.178 real 0m4.717s 00:12:31.178 user 0m5.659s 00:12:31.178 sys 0m0.565s 00:12:31.178 14:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.178 14:12:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.178 14:12:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:12:31.178 14:12:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.178 14:12:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.178 14:12:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.178 ************************************ 00:12:31.178 START TEST 
raid_write_error_test 00:12:31.178 ************************************ 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.178 14:12:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Z1g0CA1IYT 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63859 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63859 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63859 ']' 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.178 14:12:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.178 [2024-11-27 14:12:02.044083] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:31.178 [2024-11-27 14:12:02.044318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63859 ] 00:12:31.438 [2024-11-27 14:12:02.227986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.438 [2024-11-27 14:12:02.364461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.698 [2024-11-27 14:12:02.606808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.698 [2024-11-27 14:12:02.606889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 BaseBdev1_malloc 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 true 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 [2024-11-27 14:12:03.032510] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:32.265 [2024-11-27 14:12:03.032692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.265 [2024-11-27 14:12:03.032729] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:32.265 [2024-11-27 14:12:03.032745] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.265 [2024-11-27 14:12:03.035408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.265 [2024-11-27 14:12:03.035459] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.265 BaseBdev1 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 BaseBdev2_malloc 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:32.265 14:12:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 true 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.265 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.265 [2024-11-27 14:12:03.109845] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:32.266 [2024-11-27 14:12:03.109970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.266 [2024-11-27 14:12:03.110015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:32.266 [2024-11-27 14:12:03.110054] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.266 [2024-11-27 14:12:03.112619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.266 [2024-11-27 14:12:03.112705] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.266 BaseBdev2 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.266 [2024-11-27 14:12:03.121893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:32.266 [2024-11-27 14:12:03.124132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.266 [2024-11-27 14:12:03.124435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:32.266 [2024-11-27 14:12:03.124495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.266 [2024-11-27 14:12:03.124842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:32.266 [2024-11-27 14:12:03.125114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:32.266 [2024-11-27 14:12:03.125178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:32.266 [2024-11-27 14:12:03.125434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.266 "name": "raid_bdev1", 00:12:32.266 "uuid": "c1f95cb1-a69b-4afe-880e-5f912dde29fc", 00:12:32.266 "strip_size_kb": 0, 00:12:32.266 "state": "online", 00:12:32.266 "raid_level": "raid1", 00:12:32.266 "superblock": true, 00:12:32.266 "num_base_bdevs": 2, 00:12:32.266 "num_base_bdevs_discovered": 2, 00:12:32.266 "num_base_bdevs_operational": 2, 00:12:32.266 "base_bdevs_list": [ 00:12:32.266 { 00:12:32.266 "name": "BaseBdev1", 00:12:32.266 "uuid": "9346ff6c-bf09-5d61-a22f-703654265cfc", 00:12:32.266 "is_configured": true, 00:12:32.266 "data_offset": 2048, 00:12:32.266 "data_size": 63488 00:12:32.266 }, 00:12:32.266 { 00:12:32.266 "name": "BaseBdev2", 00:12:32.266 "uuid": "bd4a1c13-13f5-5ac9-a39c-06c00996bae8", 00:12:32.266 "is_configured": true, 00:12:32.266 "data_offset": 2048, 00:12:32.266 "data_size": 63488 00:12:32.266 } 00:12:32.266 ] 00:12:32.266 }' 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.266 14:12:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.833 14:12:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:32.833 14:12:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:32.833 [2024-11-27 14:12:03.650593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.782 [2024-11-27 14:12:04.551744] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:33.782 [2024-11-27 14:12:04.551824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.782 [2024-11-27 14:12:04.552034] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.782 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.783 "name": "raid_bdev1", 00:12:33.783 "uuid": "c1f95cb1-a69b-4afe-880e-5f912dde29fc", 00:12:33.783 "strip_size_kb": 0, 00:12:33.783 "state": "online", 00:12:33.783 "raid_level": "raid1", 00:12:33.783 "superblock": true, 00:12:33.783 "num_base_bdevs": 2, 00:12:33.783 "num_base_bdevs_discovered": 1, 00:12:33.783 "num_base_bdevs_operational": 1, 00:12:33.783 "base_bdevs_list": [ 00:12:33.783 { 00:12:33.783 "name": null, 00:12:33.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.783 "is_configured": false, 00:12:33.783 "data_offset": 0, 00:12:33.783 "data_size": 63488 00:12:33.783 }, 00:12:33.783 { 00:12:33.783 "name": 
"BaseBdev2", 00:12:33.783 "uuid": "bd4a1c13-13f5-5ac9-a39c-06c00996bae8", 00:12:33.783 "is_configured": true, 00:12:33.783 "data_offset": 2048, 00:12:33.783 "data_size": 63488 00:12:33.783 } 00:12:33.783 ] 00:12:33.783 }' 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.783 14:12:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.349 [2024-11-27 14:12:05.046442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.349 [2024-11-27 14:12:05.046568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.349 [2024-11-27 14:12:05.049902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.349 [2024-11-27 14:12:05.049997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.349 [2024-11-27 14:12:05.050091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.349 [2024-11-27 14:12:05.050175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:34.349 { 00:12:34.349 "results": [ 00:12:34.349 { 00:12:34.349 "job": "raid_bdev1", 00:12:34.349 "core_mask": "0x1", 00:12:34.349 "workload": "randrw", 00:12:34.349 "percentage": 50, 00:12:34.349 "status": "finished", 00:12:34.349 "queue_depth": 1, 00:12:34.349 "io_size": 131072, 00:12:34.349 "runtime": 1.396613, 00:12:34.349 "iops": 16934.540921500804, 00:12:34.349 "mibps": 2116.8176151876005, 00:12:34.349 "io_failed": 0, 00:12:34.349 "io_timeout": 0, 
00:12:34.349 "avg_latency_us": 55.72695553369882, 00:12:34.349 "min_latency_us": 27.94759825327511, 00:12:34.349 "max_latency_us": 1767.1825327510917 00:12:34.349 } 00:12:34.349 ], 00:12:34.349 "core_count": 1 00:12:34.349 } 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63859 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63859 ']' 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63859 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63859 00:12:34.349 killing process with pid 63859 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63859' 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63859 00:12:34.349 [2024-11-27 14:12:05.092140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.349 14:12:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63859 00:12:34.349 [2024-11-27 14:12:05.256505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v 
Job /raidtest/tmp.Z1g0CA1IYT 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:35.726 00:12:35.726 real 0m4.740s 00:12:35.726 user 0m5.686s 00:12:35.726 sys 0m0.576s 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.726 ************************************ 00:12:35.726 END TEST raid_write_error_test 00:12:35.726 ************************************ 00:12:35.726 14:12:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.984 14:12:06 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:35.984 14:12:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:35.984 14:12:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:35.984 14:12:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:35.984 14:12:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:35.984 14:12:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:35.984 ************************************ 00:12:35.984 START TEST raid_state_function_test 00:12:35.984 ************************************ 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:35.984 14:12:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:35.984 Process raid pid: 64003 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64003 00:12:35.984 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64003' 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64003 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64003 ']' 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:35.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:35.985 14:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.985 [2024-11-27 14:12:06.838797] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:12:35.985 [2024-11-27 14:12:06.838934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.244 [2024-11-27 14:12:07.016617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.244 [2024-11-27 14:12:07.149916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.503 [2024-11-27 14:12:07.377018] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.503 [2024-11-27 14:12:07.377177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.761 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.761 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:36.761 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:36.761 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.761 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.020 [2024-11-27 14:12:07.717392] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.020 [2024-11-27 14:12:07.717455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.020 [2024-11-27 14:12:07.717467] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.020 [2024-11-27 14:12:07.717495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.020 [2024-11-27 14:12:07.717503] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.020 [2024-11-27 14:12:07.717513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.020 "name": "Existed_Raid", 00:12:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.020 "strip_size_kb": 64, 00:12:37.020 "state": "configuring", 00:12:37.020 "raid_level": "raid0", 00:12:37.020 "superblock": false, 00:12:37.020 "num_base_bdevs": 3, 00:12:37.020 "num_base_bdevs_discovered": 0, 00:12:37.020 "num_base_bdevs_operational": 3, 00:12:37.020 "base_bdevs_list": [ 00:12:37.020 { 00:12:37.020 "name": "BaseBdev1", 00:12:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.020 "is_configured": false, 00:12:37.020 "data_offset": 0, 00:12:37.020 "data_size": 0 00:12:37.020 }, 00:12:37.020 { 00:12:37.020 "name": "BaseBdev2", 00:12:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.020 "is_configured": false, 00:12:37.020 "data_offset": 0, 00:12:37.020 "data_size": 0 00:12:37.020 }, 00:12:37.020 { 00:12:37.020 "name": "BaseBdev3", 00:12:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.020 "is_configured": false, 00:12:37.020 "data_offset": 0, 00:12:37.020 "data_size": 0 00:12:37.020 } 00:12:37.020 ] 00:12:37.020 }' 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.020 14:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.279 14:12:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.279 [2024-11-27 14:12:08.172575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:37.279 [2024-11-27 14:12:08.172672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.279 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.279 [2024-11-27 14:12:08.184561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.279 [2024-11-27 14:12:08.184660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.279 [2024-11-27 14:12:08.184694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.279 [2024-11-27 14:12:08.184723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.279 [2024-11-27 14:12:08.184747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.280 [2024-11-27 14:12:08.184773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.280 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.280 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:37.280 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:37.280 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.538 [2024-11-27 14:12:08.237858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.538 BaseBdev1 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.538 [ 00:12:37.538 { 00:12:37.538 "name": "BaseBdev1", 00:12:37.538 "aliases": [ 00:12:37.538 "eec4e505-e33d-436f-8c9e-666fd914c22b" 00:12:37.538 ], 00:12:37.538 
"product_name": "Malloc disk", 00:12:37.538 "block_size": 512, 00:12:37.538 "num_blocks": 65536, 00:12:37.538 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b", 00:12:37.538 "assigned_rate_limits": { 00:12:37.538 "rw_ios_per_sec": 0, 00:12:37.538 "rw_mbytes_per_sec": 0, 00:12:37.538 "r_mbytes_per_sec": 0, 00:12:37.538 "w_mbytes_per_sec": 0 00:12:37.538 }, 00:12:37.538 "claimed": true, 00:12:37.538 "claim_type": "exclusive_write", 00:12:37.538 "zoned": false, 00:12:37.538 "supported_io_types": { 00:12:37.538 "read": true, 00:12:37.538 "write": true, 00:12:37.538 "unmap": true, 00:12:37.538 "flush": true, 00:12:37.538 "reset": true, 00:12:37.538 "nvme_admin": false, 00:12:37.538 "nvme_io": false, 00:12:37.538 "nvme_io_md": false, 00:12:37.538 "write_zeroes": true, 00:12:37.538 "zcopy": true, 00:12:37.538 "get_zone_info": false, 00:12:37.538 "zone_management": false, 00:12:37.538 "zone_append": false, 00:12:37.538 "compare": false, 00:12:37.538 "compare_and_write": false, 00:12:37.538 "abort": true, 00:12:37.538 "seek_hole": false, 00:12:37.538 "seek_data": false, 00:12:37.538 "copy": true, 00:12:37.538 "nvme_iov_md": false 00:12:37.538 }, 00:12:37.538 "memory_domains": [ 00:12:37.538 { 00:12:37.538 "dma_device_id": "system", 00:12:37.538 "dma_device_type": 1 00:12:37.538 }, 00:12:37.538 { 00:12:37.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.538 "dma_device_type": 2 00:12:37.538 } 00:12:37.538 ], 00:12:37.538 "driver_specific": {} 00:12:37.538 } 00:12:37.538 ] 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.538 14:12:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.538 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.539 "name": "Existed_Raid", 00:12:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.539 "strip_size_kb": 64, 00:12:37.539 "state": "configuring", 00:12:37.539 "raid_level": "raid0", 00:12:37.539 "superblock": false, 00:12:37.539 "num_base_bdevs": 3, 00:12:37.539 "num_base_bdevs_discovered": 1, 00:12:37.539 "num_base_bdevs_operational": 3, 00:12:37.539 "base_bdevs_list": [ 00:12:37.539 { 00:12:37.539 "name": "BaseBdev1", 
00:12:37.539 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b", 00:12:37.539 "is_configured": true, 00:12:37.539 "data_offset": 0, 00:12:37.539 "data_size": 65536 00:12:37.539 }, 00:12:37.539 { 00:12:37.539 "name": "BaseBdev2", 00:12:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.539 "is_configured": false, 00:12:37.539 "data_offset": 0, 00:12:37.539 "data_size": 0 00:12:37.539 }, 00:12:37.539 { 00:12:37.539 "name": "BaseBdev3", 00:12:37.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.539 "is_configured": false, 00:12:37.539 "data_offset": 0, 00:12:37.539 "data_size": 0 00:12:37.539 } 00:12:37.539 ] 00:12:37.539 }' 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.539 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.798 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.058 [2024-11-27 14:12:08.757045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.058 [2024-11-27 14:12:08.757181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.058 [2024-11-27 
14:12:08.769071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:38.058 [2024-11-27 14:12:08.771142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:12:38.058 [2024-11-27 14:12:08.771182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:12:38.058 [2024-11-27 14:12:08.771192] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:12:38.058 [2024-11-27 14:12:08.771201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:38.058 "name": "Existed_Raid",
00:12:38.058 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:38.058 "strip_size_kb": 64,
00:12:38.058 "state": "configuring",
00:12:38.058 "raid_level": "raid0",
00:12:38.058 "superblock": false,
00:12:38.058 "num_base_bdevs": 3,
00:12:38.058 "num_base_bdevs_discovered": 1,
00:12:38.058 "num_base_bdevs_operational": 3,
00:12:38.058 "base_bdevs_list": [
00:12:38.058 {
00:12:38.058 "name": "BaseBdev1",
00:12:38.058 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b",
00:12:38.058 "is_configured": true,
00:12:38.058 "data_offset": 0,
00:12:38.058 "data_size": 65536
00:12:38.058 },
00:12:38.058 {
00:12:38.058 "name": "BaseBdev2",
00:12:38.058 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:38.058 "is_configured": false,
00:12:38.058 "data_offset": 0,
00:12:38.058 "data_size": 0
00:12:38.058 },
00:12:38.058 {
00:12:38.058 "name": "BaseBdev3",
00:12:38.058 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:38.058 "is_configured": false,
00:12:38.058 "data_offset": 0,
00:12:38.058 "data_size": 0
00:12:38.058 }
00:12:38.058 ]
00:12:38.058 }'
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:38.058 14:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.318 [2024-11-27 14:12:09.259817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:38.318 BaseBdev2
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.318 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.577 [
00:12:38.577 {
00:12:38.577 "name": "BaseBdev2",
00:12:38.577 "aliases": [
00:12:38.577 "844c39f5-4b4a-4c14-a645-3b1390870f56"
00:12:38.577 ],
00:12:38.577 "product_name": "Malloc disk",
00:12:38.577 "block_size": 512,
00:12:38.577 "num_blocks": 65536,
00:12:38.577 "uuid": "844c39f5-4b4a-4c14-a645-3b1390870f56",
00:12:38.577 "assigned_rate_limits": {
00:12:38.577 "rw_ios_per_sec": 0,
00:12:38.577 "rw_mbytes_per_sec": 0,
00:12:38.577 "r_mbytes_per_sec": 0,
00:12:38.577 "w_mbytes_per_sec": 0
00:12:38.577 },
00:12:38.577 "claimed": true,
00:12:38.577 "claim_type": "exclusive_write",
00:12:38.577 "zoned": false,
00:12:38.577 "supported_io_types": {
00:12:38.577 "read": true,
00:12:38.577 "write": true,
00:12:38.577 "unmap": true,
00:12:38.577 "flush": true,
00:12:38.577 "reset": true,
00:12:38.577 "nvme_admin": false,
00:12:38.577 "nvme_io": false,
00:12:38.577 "nvme_io_md": false,
00:12:38.577 "write_zeroes": true,
00:12:38.577 "zcopy": true,
00:12:38.577 "get_zone_info": false,
00:12:38.577 "zone_management": false,
00:12:38.577 "zone_append": false,
00:12:38.577 "compare": false,
00:12:38.577 "compare_and_write": false,
00:12:38.577 "abort": true,
00:12:38.577 "seek_hole": false,
00:12:38.577 "seek_data": false,
00:12:38.577 "copy": true,
00:12:38.577 "nvme_iov_md": false
00:12:38.577 },
00:12:38.577 "memory_domains": [
00:12:38.577 {
00:12:38.577 "dma_device_id": "system",
00:12:38.577 "dma_device_type": 1
00:12:38.577 },
00:12:38.577 {
00:12:38.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:38.577 "dma_device_type": 2
00:12:38.577 }
00:12:38.577 ],
00:12:38.577 "driver_specific": {}
00:12:38.577 }
00:12:38.577 ]
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.577 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.578 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.578 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:38.578 "name": "Existed_Raid",
00:12:38.578 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:38.578 "strip_size_kb": 64,
00:12:38.578 "state": "configuring",
00:12:38.578 "raid_level": "raid0",
00:12:38.578 "superblock": false,
00:12:38.578 "num_base_bdevs": 3,
00:12:38.578 "num_base_bdevs_discovered": 2,
00:12:38.578 "num_base_bdevs_operational": 3,
00:12:38.578 "base_bdevs_list": [
00:12:38.578 {
00:12:38.578 "name": "BaseBdev1",
00:12:38.578 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b",
00:12:38.578 "is_configured": true,
00:12:38.578 "data_offset": 0,
00:12:38.578 "data_size": 65536
00:12:38.578 },
00:12:38.578 {
00:12:38.578 "name": "BaseBdev2",
00:12:38.578 "uuid": "844c39f5-4b4a-4c14-a645-3b1390870f56",
00:12:38.578 "is_configured": true,
00:12:38.578 "data_offset": 0,
00:12:38.578 "data_size": 65536
00:12:38.578 },
00:12:38.578 {
00:12:38.578 "name": "BaseBdev3",
00:12:38.578 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:38.578 "is_configured": false,
00:12:38.578 "data_offset": 0,
00:12:38.578 "data_size": 0
00:12:38.578 }
00:12:38.578 ]
00:12:38.578 }'
00:12:38.578 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:38.578 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.836 [2024-11-27 14:12:09.768558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:38.836 [2024-11-27 14:12:09.768669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:38.836 [2024-11-27 14:12:09.768690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:12:38.836 [2024-11-27 14:12:09.769009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:12:38.836 [2024-11-27 14:12:09.769249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:38.836 [2024-11-27 14:12:09.769266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:12:38.836 [2024-11-27 14:12:09.769573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:38.836 BaseBdev3
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.836 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.096 [
00:12:39.096 {
00:12:39.096 "name": "BaseBdev3",
00:12:39.096 "aliases": [
00:12:39.096 "a444b26d-f799-4a56-8d51-08b542840760"
00:12:39.096 ],
00:12:39.096 "product_name": "Malloc disk",
00:12:39.096 "block_size": 512,
00:12:39.096 "num_blocks": 65536,
00:12:39.096 "uuid": "a444b26d-f799-4a56-8d51-08b542840760",
00:12:39.096 "assigned_rate_limits": {
00:12:39.096 "rw_ios_per_sec": 0,
00:12:39.096 "rw_mbytes_per_sec": 0,
00:12:39.096 "r_mbytes_per_sec": 0,
00:12:39.096 "w_mbytes_per_sec": 0
00:12:39.096 },
00:12:39.096 "claimed": true,
00:12:39.096 "claim_type": "exclusive_write",
00:12:39.096 "zoned": false,
00:12:39.096 "supported_io_types": {
00:12:39.096 "read": true,
00:12:39.096 "write": true,
00:12:39.096 "unmap": true,
00:12:39.096 "flush": true,
00:12:39.096 "reset": true,
00:12:39.096 "nvme_admin": false,
00:12:39.096 "nvme_io": false,
00:12:39.096 "nvme_io_md": false,
00:12:39.096 "write_zeroes": true,
00:12:39.096 "zcopy": true,
00:12:39.096 "get_zone_info": false,
00:12:39.096 "zone_management": false,
00:12:39.096 "zone_append": false,
00:12:39.096 "compare": false,
00:12:39.096 "compare_and_write": false,
00:12:39.096 "abort": true,
00:12:39.096 "seek_hole": false,
00:12:39.096 "seek_data": false,
00:12:39.096 "copy": true,
00:12:39.096 "nvme_iov_md": false
00:12:39.096 },
00:12:39.096 "memory_domains": [
00:12:39.096 {
00:12:39.096 "dma_device_id": "system",
00:12:39.096 "dma_device_type": 1
00:12:39.096 },
00:12:39.096 {
00:12:39.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.096 "dma_device_type": 2
00:12:39.096 }
00:12:39.096 ],
00:12:39.096 "driver_specific": {}
00:12:39.096 }
00:12:39.096 ]
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.096 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.096 "name": "Existed_Raid",
00:12:39.096 "uuid": "450e88d6-6a3e-4fc9-b796-1e9312620239",
00:12:39.096 "strip_size_kb": 64,
00:12:39.096 "state": "online",
00:12:39.096 "raid_level": "raid0",
00:12:39.096 "superblock": false,
00:12:39.096 "num_base_bdevs": 3,
00:12:39.096 "num_base_bdevs_discovered": 3,
00:12:39.096 "num_base_bdevs_operational": 3,
00:12:39.096 "base_bdevs_list": [
00:12:39.096 {
00:12:39.096 "name": "BaseBdev1",
00:12:39.096 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b",
00:12:39.096 "is_configured": true,
00:12:39.096 "data_offset": 0,
00:12:39.096 "data_size": 65536
00:12:39.096 },
00:12:39.096 {
00:12:39.096 "name": "BaseBdev2",
00:12:39.096 "uuid": "844c39f5-4b4a-4c14-a645-3b1390870f56",
00:12:39.096 "is_configured": true,
00:12:39.096 "data_offset": 0,
00:12:39.096 "data_size": 65536
00:12:39.096 },
00:12:39.096 {
00:12:39.096 "name": "BaseBdev3",
00:12:39.096 "uuid": "a444b26d-f799-4a56-8d51-08b542840760",
00:12:39.096 "is_configured": true,
00:12:39.096 "data_offset": 0,
00:12:39.096 "data_size": 65536
00:12:39.096 }
00:12:39.096 ]
00:12:39.096 }'
00:12:39.097 14:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.097 14:12:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.356 [2024-11-27 14:12:10.284190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.356 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:39.356 "name": "Existed_Raid",
00:12:39.356 "aliases": [
00:12:39.356 "450e88d6-6a3e-4fc9-b796-1e9312620239"
00:12:39.356 ],
00:12:39.356 "product_name": "Raid Volume",
00:12:39.356 "block_size": 512,
00:12:39.356 "num_blocks": 196608,
00:12:39.356 "uuid": "450e88d6-6a3e-4fc9-b796-1e9312620239",
00:12:39.356 "assigned_rate_limits": {
00:12:39.356 "rw_ios_per_sec": 0,
00:12:39.356 "rw_mbytes_per_sec": 0,
00:12:39.356 "r_mbytes_per_sec": 0,
00:12:39.356 "w_mbytes_per_sec": 0
00:12:39.356 },
00:12:39.356 "claimed": false,
00:12:39.356 "zoned": false,
00:12:39.356 "supported_io_types": {
00:12:39.356 "read": true,
00:12:39.356 "write": true,
00:12:39.356 "unmap": true,
00:12:39.356 "flush": true,
00:12:39.356 "reset": true,
00:12:39.356 "nvme_admin": false,
00:12:39.356 "nvme_io": false,
00:12:39.356 "nvme_io_md": false,
00:12:39.356 "write_zeroes": true,
00:12:39.356 "zcopy": false,
00:12:39.356 "get_zone_info": false,
00:12:39.356 "zone_management": false,
00:12:39.356 "zone_append": false,
00:12:39.356 "compare": false,
00:12:39.356 "compare_and_write": false,
00:12:39.356 "abort": false,
00:12:39.356 "seek_hole": false,
00:12:39.356 "seek_data": false,
00:12:39.356 "copy": false,
00:12:39.356 "nvme_iov_md": false
00:12:39.356 },
00:12:39.356 "memory_domains": [
00:12:39.356 {
00:12:39.356 "dma_device_id": "system",
00:12:39.356 "dma_device_type": 1
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.356 "dma_device_type": 2
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "dma_device_id": "system",
00:12:39.356 "dma_device_type": 1
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.356 "dma_device_type": 2
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "dma_device_id": "system",
00:12:39.356 "dma_device_type": 1
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.356 "dma_device_type": 2
00:12:39.356 }
00:12:39.356 ],
00:12:39.356 "driver_specific": {
00:12:39.356 "raid": {
00:12:39.356 "uuid": "450e88d6-6a3e-4fc9-b796-1e9312620239",
00:12:39.356 "strip_size_kb": 64,
00:12:39.356 "state": "online",
00:12:39.356 "raid_level": "raid0",
00:12:39.356 "superblock": false,
00:12:39.356 "num_base_bdevs": 3,
00:12:39.356 "num_base_bdevs_discovered": 3,
00:12:39.356 "num_base_bdevs_operational": 3,
00:12:39.356 "base_bdevs_list": [
00:12:39.356 {
00:12:39.356 "name": "BaseBdev1",
00:12:39.356 "uuid": "eec4e505-e33d-436f-8c9e-666fd914c22b",
00:12:39.356 "is_configured": true,
00:12:39.356 "data_offset": 0,
00:12:39.356 "data_size": 65536
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "name": "BaseBdev2",
00:12:39.356 "uuid": "844c39f5-4b4a-4c14-a645-3b1390870f56",
00:12:39.356 "is_configured": true,
00:12:39.356 "data_offset": 0,
00:12:39.356 "data_size": 65536
00:12:39.356 },
00:12:39.356 {
00:12:39.356 "name": "BaseBdev3",
00:12:39.356 "uuid": "a444b26d-f799-4a56-8d51-08b542840760",
00:12:39.356 "is_configured": true,
00:12:39.356 "data_offset": 0,
00:12:39.356 "data_size": 65536
00:12:39.356 }
00:12:39.356 ]
00:12:39.356 }
00:12:39.356 }
00:12:39.356 }'
00:12:39.614 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:12:39.615 BaseBdev2
00:12:39.615 BaseBdev3'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.615 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.873 [2024-11-27 14:12:10.571367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:39.873 [2024-11-27 14:12:10.571398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:39.873 [2024-11-27 14:12:10.571457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.873 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.873 "name": "Existed_Raid",
00:12:39.873 "uuid": "450e88d6-6a3e-4fc9-b796-1e9312620239",
00:12:39.873 "strip_size_kb": 64,
00:12:39.873 "state": "offline",
00:12:39.873 "raid_level": "raid0",
00:12:39.873 "superblock": false,
00:12:39.874 "num_base_bdevs": 3,
00:12:39.874 "num_base_bdevs_discovered": 2,
00:12:39.874 "num_base_bdevs_operational": 2,
00:12:39.874 "base_bdevs_list": [
00:12:39.874 {
00:12:39.874 "name": null,
00:12:39.874 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.874 "is_configured": false,
00:12:39.874 "data_offset": 0,
00:12:39.874 "data_size": 65536
00:12:39.874 },
00:12:39.874 {
00:12:39.874 "name": "BaseBdev2",
00:12:39.874 "uuid": "844c39f5-4b4a-4c14-a645-3b1390870f56",
00:12:39.874 "is_configured": true,
00:12:39.874 "data_offset": 0,
00:12:39.874 "data_size": 65536
00:12:39.874 },
00:12:39.874 {
00:12:39.874 "name": "BaseBdev3",
00:12:39.874 "uuid": "a444b26d-f799-4a56-8d51-08b542840760",
00:12:39.874 "is_configured": true,
00:12:39.874 "data_offset": 0,
00:12:39.874 "data_size": 65536
00:12:39.874 }
00:12:39.874 ]
00:12:39.874 }'
00:12:39.874 14:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.874 14:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.439 [2024-11-27 14:12:11.186689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.439 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.439 [2024-11-27 14:12:11.361083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:40.439 [2024-11-27 14:12:11.361217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:12:40.699 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.699 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:40.699 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.700 BaseBdev2
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.700 [
00:12:40.700 {
00:12:40.700 "name": "BaseBdev2",
00:12:40.700 "aliases": [
00:12:40.700 "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf"
00:12:40.700 ],
00:12:40.700 "product_name": "Malloc disk",
00:12:40.700 "block_size": 512,
00:12:40.700 "num_blocks": 65536,
00:12:40.700 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf",
00:12:40.700 "assigned_rate_limits": {
00:12:40.700 "rw_ios_per_sec": 0,
00:12:40.700 "rw_mbytes_per_sec": 0,
00:12:40.700 "r_mbytes_per_sec": 0,
00:12:40.700 "w_mbytes_per_sec": 0
00:12:40.700 },
00:12:40.700 "claimed": false,
00:12:40.700 "zoned": false,
00:12:40.700 "supported_io_types": {
00:12:40.700 "read": true,
00:12:40.700 "write": true,
00:12:40.700 "unmap": true,
00:12:40.700 "flush": true,
00:12:40.700 "reset": true,
00:12:40.700 "nvme_admin": false,
00:12:40.700 "nvme_io": false,
00:12:40.700 "nvme_io_md": false,
00:12:40.700 "write_zeroes": true,
00:12:40.700 "zcopy": true,
00:12:40.700 "get_zone_info": false,
00:12:40.700 "zone_management": false,
00:12:40.700 "zone_append": false,
00:12:40.700 "compare": false,
00:12:40.700 "compare_and_write": false,
00:12:40.700 "abort": true,
00:12:40.700 "seek_hole": false,
00:12:40.700 "seek_data": false,
00:12:40.700 "copy": true,
00:12:40.700 "nvme_iov_md": false
00:12:40.700 },
00:12:40.700 "memory_domains": [
00:12:40.700 {
00:12:40.700 "dma_device_id": "system",
00:12:40.700 "dma_device_type": 1
00:12:40.700 },
00:12:40.700 {
00:12:40.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:40.700 "dma_device_type": 2
00:12:40.700 }
00:12:40.700 ],
00:12:40.700 "driver_specific": {}
00:12:40.700 }
00:12:40.700 ]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.700 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.700 BaseBdev3
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.960 [ 00:12:40.960 { 00:12:40.960 "name": "BaseBdev3", 00:12:40.960 "aliases": [ 00:12:40.960 "e9fc108a-0d73-4968-b3a0-e8f5a86129f4" 00:12:40.960 ], 00:12:40.960 "product_name": "Malloc disk", 00:12:40.960 "block_size": 512, 00:12:40.960 "num_blocks": 65536, 00:12:40.960 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:40.960 "assigned_rate_limits": { 00:12:40.960 "rw_ios_per_sec": 0, 00:12:40.960 "rw_mbytes_per_sec": 0, 00:12:40.960 "r_mbytes_per_sec": 0, 00:12:40.960 "w_mbytes_per_sec": 0 00:12:40.960 }, 00:12:40.960 "claimed": false, 00:12:40.960 "zoned": false, 00:12:40.960 "supported_io_types": { 00:12:40.960 "read": true, 00:12:40.960 "write": true, 00:12:40.960 "unmap": true, 00:12:40.960 "flush": true, 00:12:40.960 "reset": true, 00:12:40.960 "nvme_admin": false, 00:12:40.960 "nvme_io": false, 00:12:40.960 "nvme_io_md": false, 00:12:40.960 "write_zeroes": true, 00:12:40.960 "zcopy": true, 00:12:40.960 "get_zone_info": false, 00:12:40.960 "zone_management": false, 00:12:40.960 "zone_append": false, 00:12:40.960 "compare": false, 00:12:40.960 "compare_and_write": false, 00:12:40.960 "abort": true, 00:12:40.960 "seek_hole": false, 00:12:40.960 "seek_data": false, 00:12:40.960 "copy": true, 00:12:40.960 "nvme_iov_md": false 00:12:40.960 }, 00:12:40.960 "memory_domains": [ 00:12:40.960 { 00:12:40.960 "dma_device_id": "system", 00:12:40.960 "dma_device_type": 1 00:12:40.960 }, 00:12:40.960 { 
00:12:40.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.960 "dma_device_type": 2 00:12:40.960 } 00:12:40.960 ], 00:12:40.960 "driver_specific": {} 00:12:40.960 } 00:12:40.960 ] 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.960 [2024-11-27 14:12:11.704340] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.960 [2024-11-27 14:12:11.704492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.960 [2024-11-27 14:12:11.704552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.960 [2024-11-27 14:12:11.706685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.960 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.961 "name": "Existed_Raid", 00:12:40.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.961 "strip_size_kb": 64, 00:12:40.961 "state": "configuring", 00:12:40.961 "raid_level": "raid0", 00:12:40.961 "superblock": false, 00:12:40.961 "num_base_bdevs": 3, 00:12:40.961 "num_base_bdevs_discovered": 2, 00:12:40.961 "num_base_bdevs_operational": 3, 00:12:40.961 "base_bdevs_list": [ 00:12:40.961 { 00:12:40.961 "name": "BaseBdev1", 00:12:40.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.961 
"is_configured": false, 00:12:40.961 "data_offset": 0, 00:12:40.961 "data_size": 0 00:12:40.961 }, 00:12:40.961 { 00:12:40.961 "name": "BaseBdev2", 00:12:40.961 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:40.961 "is_configured": true, 00:12:40.961 "data_offset": 0, 00:12:40.961 "data_size": 65536 00:12:40.961 }, 00:12:40.961 { 00:12:40.961 "name": "BaseBdev3", 00:12:40.961 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:40.961 "is_configured": true, 00:12:40.961 "data_offset": 0, 00:12:40.961 "data_size": 65536 00:12:40.961 } 00:12:40.961 ] 00:12:40.961 }' 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.961 14:12:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.220 [2024-11-27 14:12:12.155590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.220 14:12:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.220 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.480 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.480 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.480 "name": "Existed_Raid", 00:12:41.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.480 "strip_size_kb": 64, 00:12:41.480 "state": "configuring", 00:12:41.480 "raid_level": "raid0", 00:12:41.480 "superblock": false, 00:12:41.480 "num_base_bdevs": 3, 00:12:41.480 "num_base_bdevs_discovered": 1, 00:12:41.480 "num_base_bdevs_operational": 3, 00:12:41.480 "base_bdevs_list": [ 00:12:41.480 { 00:12:41.480 "name": "BaseBdev1", 00:12:41.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.480 "is_configured": false, 00:12:41.480 "data_offset": 0, 00:12:41.480 "data_size": 0 00:12:41.480 }, 00:12:41.480 { 00:12:41.480 "name": null, 00:12:41.480 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:41.480 "is_configured": false, 00:12:41.480 "data_offset": 0, 
00:12:41.480 "data_size": 65536 00:12:41.480 }, 00:12:41.480 { 00:12:41.480 "name": "BaseBdev3", 00:12:41.480 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:41.480 "is_configured": true, 00:12:41.480 "data_offset": 0, 00:12:41.480 "data_size": 65536 00:12:41.480 } 00:12:41.480 ] 00:12:41.480 }' 00:12:41.480 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.480 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.739 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.739 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.739 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.739 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:41.739 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.997 [2024-11-27 14:12:12.742737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.997 BaseBdev1 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.997 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.997 [ 00:12:41.997 { 00:12:41.997 "name": "BaseBdev1", 00:12:41.997 "aliases": [ 00:12:41.997 "74593afc-a184-403a-828d-06461ed04dd8" 00:12:41.997 ], 00:12:41.997 "product_name": "Malloc disk", 00:12:41.997 "block_size": 512, 00:12:41.997 "num_blocks": 65536, 00:12:41.997 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:41.997 "assigned_rate_limits": { 00:12:41.997 "rw_ios_per_sec": 0, 00:12:41.997 "rw_mbytes_per_sec": 0, 00:12:41.997 "r_mbytes_per_sec": 0, 00:12:41.997 "w_mbytes_per_sec": 0 00:12:41.997 }, 00:12:41.997 "claimed": true, 00:12:41.997 "claim_type": "exclusive_write", 00:12:41.998 "zoned": false, 00:12:41.998 "supported_io_types": { 00:12:41.998 "read": true, 00:12:41.998 "write": true, 00:12:41.998 "unmap": 
true, 00:12:41.998 "flush": true, 00:12:41.998 "reset": true, 00:12:41.998 "nvme_admin": false, 00:12:41.998 "nvme_io": false, 00:12:41.998 "nvme_io_md": false, 00:12:41.998 "write_zeroes": true, 00:12:41.998 "zcopy": true, 00:12:41.998 "get_zone_info": false, 00:12:41.998 "zone_management": false, 00:12:41.998 "zone_append": false, 00:12:41.998 "compare": false, 00:12:41.998 "compare_and_write": false, 00:12:41.998 "abort": true, 00:12:41.998 "seek_hole": false, 00:12:41.998 "seek_data": false, 00:12:41.998 "copy": true, 00:12:41.998 "nvme_iov_md": false 00:12:41.998 }, 00:12:41.998 "memory_domains": [ 00:12:41.998 { 00:12:41.998 "dma_device_id": "system", 00:12:41.998 "dma_device_type": 1 00:12:41.998 }, 00:12:41.998 { 00:12:41.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.998 "dma_device_type": 2 00:12:41.998 } 00:12:41.998 ], 00:12:41.998 "driver_specific": {} 00:12:41.998 } 00:12:41.998 ] 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.998 14:12:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.998 "name": "Existed_Raid", 00:12:41.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.998 "strip_size_kb": 64, 00:12:41.998 "state": "configuring", 00:12:41.998 "raid_level": "raid0", 00:12:41.998 "superblock": false, 00:12:41.998 "num_base_bdevs": 3, 00:12:41.998 "num_base_bdevs_discovered": 2, 00:12:41.998 "num_base_bdevs_operational": 3, 00:12:41.998 "base_bdevs_list": [ 00:12:41.998 { 00:12:41.998 "name": "BaseBdev1", 00:12:41.998 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:41.998 "is_configured": true, 00:12:41.998 "data_offset": 0, 00:12:41.998 "data_size": 65536 00:12:41.998 }, 00:12:41.998 { 00:12:41.998 "name": null, 00:12:41.998 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:41.998 "is_configured": false, 00:12:41.998 "data_offset": 0, 00:12:41.998 "data_size": 65536 00:12:41.998 }, 00:12:41.998 { 00:12:41.998 "name": "BaseBdev3", 00:12:41.998 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:41.998 "is_configured": true, 00:12:41.998 "data_offset": 0, 
00:12:41.998 "data_size": 65536 00:12:41.998 } 00:12:41.998 ] 00:12:41.998 }' 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.998 14:12:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:42.577 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 [2024-11-27 14:12:13.305902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.578 "name": "Existed_Raid", 00:12:42.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.578 "strip_size_kb": 64, 00:12:42.578 "state": "configuring", 00:12:42.578 "raid_level": "raid0", 00:12:42.578 "superblock": false, 00:12:42.578 "num_base_bdevs": 3, 00:12:42.578 "num_base_bdevs_discovered": 1, 00:12:42.578 "num_base_bdevs_operational": 3, 00:12:42.578 "base_bdevs_list": [ 00:12:42.578 { 00:12:42.578 "name": "BaseBdev1", 00:12:42.578 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:42.578 "is_configured": true, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 65536 00:12:42.578 }, 00:12:42.578 { 
00:12:42.578 "name": null, 00:12:42.578 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:42.578 "is_configured": false, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 65536 00:12:42.578 }, 00:12:42.578 { 00:12:42.578 "name": null, 00:12:42.578 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:42.578 "is_configured": false, 00:12:42.578 "data_offset": 0, 00:12:42.578 "data_size": 65536 00:12:42.578 } 00:12:42.578 ] 00:12:42.578 }' 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.578 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.836 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.836 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:42.836 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.836 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 [2024-11-27 14:12:13.825077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.096 "name": "Existed_Raid", 00:12:43.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.096 "strip_size_kb": 64, 00:12:43.096 "state": "configuring", 00:12:43.096 "raid_level": "raid0", 00:12:43.096 
"superblock": false, 00:12:43.096 "num_base_bdevs": 3, 00:12:43.096 "num_base_bdevs_discovered": 2, 00:12:43.096 "num_base_bdevs_operational": 3, 00:12:43.096 "base_bdevs_list": [ 00:12:43.096 { 00:12:43.096 "name": "BaseBdev1", 00:12:43.096 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:43.096 "is_configured": true, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 65536 00:12:43.096 }, 00:12:43.096 { 00:12:43.096 "name": null, 00:12:43.096 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:43.096 "is_configured": false, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 65536 00:12:43.096 }, 00:12:43.096 { 00:12:43.096 "name": "BaseBdev3", 00:12:43.096 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:43.096 "is_configured": true, 00:12:43.096 "data_offset": 0, 00:12:43.096 "data_size": 65536 00:12:43.096 } 00:12:43.096 ] 00:12:43.096 }' 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.096 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.354 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.354 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.354 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.354 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:43.354 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.614 [2024-11-27 14:12:14.332301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.614 "name": "Existed_Raid", 00:12:43.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.614 "strip_size_kb": 64, 00:12:43.614 "state": "configuring", 00:12:43.614 "raid_level": "raid0", 00:12:43.614 "superblock": false, 00:12:43.614 "num_base_bdevs": 3, 00:12:43.614 "num_base_bdevs_discovered": 1, 00:12:43.614 "num_base_bdevs_operational": 3, 00:12:43.614 "base_bdevs_list": [ 00:12:43.614 { 00:12:43.614 "name": null, 00:12:43.614 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:43.614 "is_configured": false, 00:12:43.614 "data_offset": 0, 00:12:43.614 "data_size": 65536 00:12:43.614 }, 00:12:43.614 { 00:12:43.614 "name": null, 00:12:43.614 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:43.614 "is_configured": false, 00:12:43.614 "data_offset": 0, 00:12:43.614 "data_size": 65536 00:12:43.614 }, 00:12:43.614 { 00:12:43.614 "name": "BaseBdev3", 00:12:43.614 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:43.614 "is_configured": true, 00:12:43.614 "data_offset": 0, 00:12:43.614 "data_size": 65536 00:12:43.614 } 00:12:43.614 ] 00:12:43.614 }' 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.614 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.181 [2024-11-27 14:12:14.946758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.181 "name": "Existed_Raid", 00:12:44.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.181 "strip_size_kb": 64, 00:12:44.181 "state": "configuring", 00:12:44.181 "raid_level": "raid0", 00:12:44.181 "superblock": false, 00:12:44.181 "num_base_bdevs": 3, 00:12:44.181 "num_base_bdevs_discovered": 2, 00:12:44.181 "num_base_bdevs_operational": 3, 00:12:44.181 "base_bdevs_list": [ 00:12:44.181 { 00:12:44.181 "name": null, 00:12:44.181 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:44.181 "is_configured": false, 00:12:44.181 "data_offset": 0, 00:12:44.181 "data_size": 65536 00:12:44.181 }, 00:12:44.181 { 00:12:44.181 "name": "BaseBdev2", 00:12:44.181 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:44.181 "is_configured": true, 00:12:44.181 "data_offset": 0, 00:12:44.181 "data_size": 65536 00:12:44.181 }, 00:12:44.181 { 00:12:44.181 "name": "BaseBdev3", 00:12:44.181 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:44.181 "is_configured": true, 00:12:44.181 "data_offset": 0, 00:12:44.181 "data_size": 65536 00:12:44.181 } 00:12:44.181 ] 00:12:44.181 }' 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.181 14:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.749 
14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 74593afc-a184-403a-828d-06461ed04dd8 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 [2024-11-27 14:12:15.576664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:44.749 [2024-11-27 14:12:15.576718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:44.749 [2024-11-27 14:12:15.576728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:44.749 [2024-11-27 14:12:15.576977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:44.749 [2024-11-27 14:12:15.577170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:44.749 [2024-11-27 14:12:15.577183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:44.749 [2024-11-27 14:12:15.577454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.749 NewBaseBdev 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.749 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:44.749 [ 00:12:44.749 { 00:12:44.749 "name": "NewBaseBdev", 00:12:44.749 "aliases": [ 00:12:44.749 "74593afc-a184-403a-828d-06461ed04dd8" 00:12:44.749 ], 00:12:44.749 "product_name": "Malloc disk", 00:12:44.749 "block_size": 512, 00:12:44.749 "num_blocks": 65536, 00:12:44.749 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:44.749 "assigned_rate_limits": { 00:12:44.749 "rw_ios_per_sec": 0, 00:12:44.749 "rw_mbytes_per_sec": 0, 00:12:44.749 "r_mbytes_per_sec": 0, 00:12:44.749 "w_mbytes_per_sec": 0 00:12:44.749 }, 00:12:44.749 "claimed": true, 00:12:44.749 "claim_type": "exclusive_write", 00:12:44.749 "zoned": false, 00:12:44.749 "supported_io_types": { 00:12:44.749 "read": true, 00:12:44.749 "write": true, 00:12:44.749 "unmap": true, 00:12:44.749 "flush": true, 00:12:44.749 "reset": true, 00:12:44.749 "nvme_admin": false, 00:12:44.749 "nvme_io": false, 00:12:44.749 "nvme_io_md": false, 00:12:44.749 "write_zeroes": true, 00:12:44.749 "zcopy": true, 00:12:44.749 "get_zone_info": false, 00:12:44.749 "zone_management": false, 00:12:44.749 "zone_append": false, 00:12:44.749 "compare": false, 00:12:44.749 "compare_and_write": false, 00:12:44.749 "abort": true, 00:12:44.749 "seek_hole": false, 00:12:44.749 "seek_data": false, 00:12:44.749 "copy": true, 00:12:44.749 "nvme_iov_md": false 00:12:44.749 }, 00:12:44.749 "memory_domains": [ 00:12:44.749 { 00:12:44.750 "dma_device_id": "system", 00:12:44.750 "dma_device_type": 1 00:12:44.750 }, 00:12:44.750 { 00:12:44.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.750 "dma_device_type": 2 00:12:44.750 } 00:12:44.750 ], 00:12:44.750 "driver_specific": {} 00:12:44.750 } 00:12:44.750 ] 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.750 "name": "Existed_Raid", 00:12:44.750 "uuid": "ae600e5e-e049-4bb3-833a-b1471fe7ab7c", 00:12:44.750 "strip_size_kb": 64, 00:12:44.750 "state": "online", 00:12:44.750 "raid_level": "raid0", 00:12:44.750 "superblock": false, 00:12:44.750 "num_base_bdevs": 3, 00:12:44.750 
"num_base_bdevs_discovered": 3, 00:12:44.750 "num_base_bdevs_operational": 3, 00:12:44.750 "base_bdevs_list": [ 00:12:44.750 { 00:12:44.750 "name": "NewBaseBdev", 00:12:44.750 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:44.750 "is_configured": true, 00:12:44.750 "data_offset": 0, 00:12:44.750 "data_size": 65536 00:12:44.750 }, 00:12:44.750 { 00:12:44.750 "name": "BaseBdev2", 00:12:44.750 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:44.750 "is_configured": true, 00:12:44.750 "data_offset": 0, 00:12:44.750 "data_size": 65536 00:12:44.750 }, 00:12:44.750 { 00:12:44.750 "name": "BaseBdev3", 00:12:44.750 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:44.750 "is_configured": true, 00:12:44.750 "data_offset": 0, 00:12:44.750 "data_size": 65536 00:12:44.750 } 00:12:44.750 ] 00:12:44.750 }' 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.750 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.317 [2024-11-27 14:12:16.112370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:45.317 "name": "Existed_Raid", 00:12:45.317 "aliases": [ 00:12:45.317 "ae600e5e-e049-4bb3-833a-b1471fe7ab7c" 00:12:45.317 ], 00:12:45.317 "product_name": "Raid Volume", 00:12:45.317 "block_size": 512, 00:12:45.317 "num_blocks": 196608, 00:12:45.317 "uuid": "ae600e5e-e049-4bb3-833a-b1471fe7ab7c", 00:12:45.317 "assigned_rate_limits": { 00:12:45.317 "rw_ios_per_sec": 0, 00:12:45.317 "rw_mbytes_per_sec": 0, 00:12:45.317 "r_mbytes_per_sec": 0, 00:12:45.317 "w_mbytes_per_sec": 0 00:12:45.317 }, 00:12:45.317 "claimed": false, 00:12:45.317 "zoned": false, 00:12:45.317 "supported_io_types": { 00:12:45.317 "read": true, 00:12:45.317 "write": true, 00:12:45.317 "unmap": true, 00:12:45.317 "flush": true, 00:12:45.317 "reset": true, 00:12:45.317 "nvme_admin": false, 00:12:45.317 "nvme_io": false, 00:12:45.317 "nvme_io_md": false, 00:12:45.317 "write_zeroes": true, 00:12:45.317 "zcopy": false, 00:12:45.317 "get_zone_info": false, 00:12:45.317 "zone_management": false, 00:12:45.317 "zone_append": false, 00:12:45.317 "compare": false, 00:12:45.317 "compare_and_write": false, 00:12:45.317 "abort": false, 00:12:45.317 "seek_hole": false, 00:12:45.317 "seek_data": false, 00:12:45.317 "copy": false, 00:12:45.317 "nvme_iov_md": false 00:12:45.317 }, 00:12:45.317 "memory_domains": [ 00:12:45.317 { 00:12:45.317 "dma_device_id": "system", 00:12:45.317 "dma_device_type": 1 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.317 "dma_device_type": 2 00:12:45.317 }, 
00:12:45.317 { 00:12:45.317 "dma_device_id": "system", 00:12:45.317 "dma_device_type": 1 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.317 "dma_device_type": 2 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "dma_device_id": "system", 00:12:45.317 "dma_device_type": 1 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.317 "dma_device_type": 2 00:12:45.317 } 00:12:45.317 ], 00:12:45.317 "driver_specific": { 00:12:45.317 "raid": { 00:12:45.317 "uuid": "ae600e5e-e049-4bb3-833a-b1471fe7ab7c", 00:12:45.317 "strip_size_kb": 64, 00:12:45.317 "state": "online", 00:12:45.317 "raid_level": "raid0", 00:12:45.317 "superblock": false, 00:12:45.317 "num_base_bdevs": 3, 00:12:45.317 "num_base_bdevs_discovered": 3, 00:12:45.317 "num_base_bdevs_operational": 3, 00:12:45.317 "base_bdevs_list": [ 00:12:45.317 { 00:12:45.317 "name": "NewBaseBdev", 00:12:45.317 "uuid": "74593afc-a184-403a-828d-06461ed04dd8", 00:12:45.317 "is_configured": true, 00:12:45.317 "data_offset": 0, 00:12:45.317 "data_size": 65536 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "name": "BaseBdev2", 00:12:45.317 "uuid": "7bed164e-1e6f-48ca-bc73-ce48b0f9c4cf", 00:12:45.317 "is_configured": true, 00:12:45.317 "data_offset": 0, 00:12:45.317 "data_size": 65536 00:12:45.317 }, 00:12:45.317 { 00:12:45.317 "name": "BaseBdev3", 00:12:45.317 "uuid": "e9fc108a-0d73-4968-b3a0-e8f5a86129f4", 00:12:45.317 "is_configured": true, 00:12:45.317 "data_offset": 0, 00:12:45.317 "data_size": 65536 00:12:45.317 } 00:12:45.317 ] 00:12:45.317 } 00:12:45.317 } 00:12:45.317 }' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:45.317 BaseBdev2 00:12:45.317 BaseBdev3' 00:12:45.317 14:12:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.317 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.576 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.576 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.576 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.576 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.577 [2024-11-27 14:12:16.403493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.577 [2024-11-27 14:12:16.403522] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.577 [2024-11-27 14:12:16.403616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.577 [2024-11-27 14:12:16.403675] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.577 [2024-11-27 14:12:16.403688] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64003 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64003 ']' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64003 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64003 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64003' 00:12:45.577 killing process with pid 64003 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64003 00:12:45.577 [2024-11-27 14:12:16.452201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.577 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64003 00:12:45.836 [2024-11-27 14:12:16.779659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.242 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:47.242 00:12:47.242 real 0m11.251s 00:12:47.242 user 0m17.935s 00:12:47.242 sys 0m1.894s 00:12:47.242 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:12:47.242 ************************************ 00:12:47.242 END TEST raid_state_function_test 00:12:47.242 ************************************ 00:12:47.242 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.242 14:12:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:47.242 14:12:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:47.242 14:12:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.242 14:12:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.242 ************************************ 00:12:47.242 START TEST raid_state_function_test_sb 00:12:47.242 ************************************ 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64635 00:12:47.242 14:12:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64635' 00:12:47.242 Process raid pid: 64635 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64635 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64635 ']' 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.242 14:12:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.242 [2024-11-27 14:12:18.154422] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:47.242 [2024-11-27 14:12:18.154563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.502 [2024-11-27 14:12:18.332890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.762 [2024-11-27 14:12:18.456615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.762 [2024-11-27 14:12:18.671630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.762 [2024-11-27 14:12:18.671673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.331 [2024-11-27 14:12:19.071188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.331 [2024-11-27 14:12:19.071246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.331 [2024-11-27 14:12:19.071264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.331 [2024-11-27 14:12:19.071275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.331 [2024-11-27 14:12:19.071283] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:12:48.331 [2024-11-27 14:12:19.071293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.331 "name": "Existed_Raid", 00:12:48.331 "uuid": "733e0fc7-3915-445f-ac23-f07bd081f525", 00:12:48.331 "strip_size_kb": 64, 00:12:48.331 "state": "configuring", 00:12:48.331 "raid_level": "raid0", 00:12:48.331 "superblock": true, 00:12:48.331 "num_base_bdevs": 3, 00:12:48.331 "num_base_bdevs_discovered": 0, 00:12:48.331 "num_base_bdevs_operational": 3, 00:12:48.331 "base_bdevs_list": [ 00:12:48.331 { 00:12:48.331 "name": "BaseBdev1", 00:12:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.331 "is_configured": false, 00:12:48.331 "data_offset": 0, 00:12:48.331 "data_size": 0 00:12:48.331 }, 00:12:48.331 { 00:12:48.331 "name": "BaseBdev2", 00:12:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.331 "is_configured": false, 00:12:48.331 "data_offset": 0, 00:12:48.331 "data_size": 0 00:12:48.331 }, 00:12:48.331 { 00:12:48.331 "name": "BaseBdev3", 00:12:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.331 "is_configured": false, 00:12:48.331 "data_offset": 0, 00:12:48.331 "data_size": 0 00:12:48.331 } 00:12:48.331 ] 00:12:48.331 }' 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.331 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.590 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.590 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.590 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.590 [2024-11-27 14:12:19.534299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.590 [2024-11-27 14:12:19.534418] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:48.591 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.591 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.591 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.591 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 [2024-11-27 14:12:19.546317] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.849 [2024-11-27 14:12:19.546412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.849 [2024-11-27 14:12:19.546447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.849 [2024-11-27 14:12:19.546473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.849 [2024-11-27 14:12:19.546500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.849 [2024-11-27 14:12:19.546526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 [2024-11-27 14:12:19.596857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.849 BaseBdev1 
00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.849 [ 00:12:48.849 { 00:12:48.849 "name": "BaseBdev1", 00:12:48.849 "aliases": [ 00:12:48.849 "44b185ba-8b76-419f-b1c7-c8aa3afcde96" 00:12:48.849 ], 00:12:48.849 "product_name": "Malloc disk", 00:12:48.849 "block_size": 512, 00:12:48.849 "num_blocks": 65536, 00:12:48.849 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:48.849 "assigned_rate_limits": { 00:12:48.849 
"rw_ios_per_sec": 0, 00:12:48.849 "rw_mbytes_per_sec": 0, 00:12:48.849 "r_mbytes_per_sec": 0, 00:12:48.849 "w_mbytes_per_sec": 0 00:12:48.849 }, 00:12:48.849 "claimed": true, 00:12:48.849 "claim_type": "exclusive_write", 00:12:48.849 "zoned": false, 00:12:48.849 "supported_io_types": { 00:12:48.849 "read": true, 00:12:48.849 "write": true, 00:12:48.849 "unmap": true, 00:12:48.849 "flush": true, 00:12:48.849 "reset": true, 00:12:48.849 "nvme_admin": false, 00:12:48.849 "nvme_io": false, 00:12:48.849 "nvme_io_md": false, 00:12:48.849 "write_zeroes": true, 00:12:48.849 "zcopy": true, 00:12:48.849 "get_zone_info": false, 00:12:48.849 "zone_management": false, 00:12:48.849 "zone_append": false, 00:12:48.849 "compare": false, 00:12:48.849 "compare_and_write": false, 00:12:48.849 "abort": true, 00:12:48.849 "seek_hole": false, 00:12:48.849 "seek_data": false, 00:12:48.849 "copy": true, 00:12:48.849 "nvme_iov_md": false 00:12:48.849 }, 00:12:48.849 "memory_domains": [ 00:12:48.849 { 00:12:48.849 "dma_device_id": "system", 00:12:48.849 "dma_device_type": 1 00:12:48.849 }, 00:12:48.849 { 00:12:48.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.849 "dma_device_type": 2 00:12:48.849 } 00:12:48.849 ], 00:12:48.849 "driver_specific": {} 00:12:48.849 } 00:12:48.849 ] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.849 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.850 "name": "Existed_Raid", 00:12:48.850 "uuid": "7dd7c742-1ce8-41f3-a595-bf8d1a63fe94", 00:12:48.850 "strip_size_kb": 64, 00:12:48.850 "state": "configuring", 00:12:48.850 "raid_level": "raid0", 00:12:48.850 "superblock": true, 00:12:48.850 "num_base_bdevs": 3, 00:12:48.850 "num_base_bdevs_discovered": 1, 00:12:48.850 "num_base_bdevs_operational": 3, 00:12:48.850 "base_bdevs_list": [ 00:12:48.850 { 00:12:48.850 "name": "BaseBdev1", 00:12:48.850 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:48.850 "is_configured": true, 00:12:48.850 "data_offset": 2048, 00:12:48.850 "data_size": 63488 
00:12:48.850 }, 00:12:48.850 { 00:12:48.850 "name": "BaseBdev2", 00:12:48.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.850 "is_configured": false, 00:12:48.850 "data_offset": 0, 00:12:48.850 "data_size": 0 00:12:48.850 }, 00:12:48.850 { 00:12:48.850 "name": "BaseBdev3", 00:12:48.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.850 "is_configured": false, 00:12:48.850 "data_offset": 0, 00:12:48.850 "data_size": 0 00:12:48.850 } 00:12:48.850 ] 00:12:48.850 }' 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.850 14:12:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.418 [2024-11-27 14:12:20.092259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.418 [2024-11-27 14:12:20.092388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.418 [2024-11-27 14:12:20.100313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.418 [2024-11-27 
14:12:20.102474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.418 [2024-11-27 14:12:20.102524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.418 [2024-11-27 14:12:20.102536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.418 [2024-11-27 14:12:20.102548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.418 "name": "Existed_Raid", 00:12:49.418 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:49.418 "strip_size_kb": 64, 00:12:49.418 "state": "configuring", 00:12:49.418 "raid_level": "raid0", 00:12:49.418 "superblock": true, 00:12:49.418 "num_base_bdevs": 3, 00:12:49.418 "num_base_bdevs_discovered": 1, 00:12:49.418 "num_base_bdevs_operational": 3, 00:12:49.418 "base_bdevs_list": [ 00:12:49.418 { 00:12:49.418 "name": "BaseBdev1", 00:12:49.418 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:49.418 "is_configured": true, 00:12:49.418 "data_offset": 2048, 00:12:49.418 "data_size": 63488 00:12:49.418 }, 00:12:49.418 { 00:12:49.418 "name": "BaseBdev2", 00:12:49.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.418 "is_configured": false, 00:12:49.418 "data_offset": 0, 00:12:49.418 "data_size": 0 00:12:49.418 }, 00:12:49.418 { 00:12:49.418 "name": "BaseBdev3", 00:12:49.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.418 "is_configured": false, 00:12:49.418 "data_offset": 0, 00:12:49.418 "data_size": 0 00:12:49.418 } 00:12:49.418 ] 00:12:49.418 }' 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.418 14:12:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.677 [2024-11-27 14:12:20.620213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.677 BaseBdev2 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.677 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.938 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.938 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.938 14:12:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.938 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.938 [ 00:12:49.938 { 00:12:49.938 "name": "BaseBdev2", 00:12:49.938 "aliases": [ 00:12:49.938 "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3" 00:12:49.938 ], 00:12:49.938 "product_name": "Malloc disk", 00:12:49.938 "block_size": 512, 00:12:49.938 "num_blocks": 65536, 00:12:49.938 "uuid": "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3", 00:12:49.938 "assigned_rate_limits": { 00:12:49.938 "rw_ios_per_sec": 0, 00:12:49.938 "rw_mbytes_per_sec": 0, 00:12:49.938 "r_mbytes_per_sec": 0, 00:12:49.938 "w_mbytes_per_sec": 0 00:12:49.938 }, 00:12:49.938 "claimed": true, 00:12:49.938 "claim_type": "exclusive_write", 00:12:49.938 "zoned": false, 00:12:49.938 "supported_io_types": { 00:12:49.938 "read": true, 00:12:49.938 "write": true, 00:12:49.938 "unmap": true, 00:12:49.938 "flush": true, 00:12:49.938 "reset": true, 00:12:49.938 "nvme_admin": false, 00:12:49.938 "nvme_io": false, 00:12:49.938 "nvme_io_md": false, 00:12:49.938 "write_zeroes": true, 00:12:49.938 "zcopy": true, 00:12:49.938 "get_zone_info": false, 00:12:49.938 "zone_management": false, 00:12:49.938 "zone_append": false, 00:12:49.938 "compare": false, 00:12:49.939 "compare_and_write": false, 00:12:49.939 "abort": true, 00:12:49.939 "seek_hole": false, 00:12:49.939 "seek_data": false, 00:12:49.939 "copy": true, 00:12:49.939 "nvme_iov_md": false 00:12:49.939 }, 00:12:49.939 "memory_domains": [ 00:12:49.939 { 00:12:49.939 "dma_device_id": "system", 00:12:49.939 "dma_device_type": 1 00:12:49.939 }, 00:12:49.939 { 00:12:49.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.939 "dma_device_type": 2 00:12:49.939 } 00:12:49.939 ], 00:12:49.939 "driver_specific": {} 00:12:49.939 } 00:12:49.939 ] 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.939 "name": "Existed_Raid", 00:12:49.939 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:49.939 "strip_size_kb": 64, 00:12:49.939 "state": "configuring", 00:12:49.939 "raid_level": "raid0", 00:12:49.939 "superblock": true, 00:12:49.939 "num_base_bdevs": 3, 00:12:49.939 "num_base_bdevs_discovered": 2, 00:12:49.939 "num_base_bdevs_operational": 3, 00:12:49.939 "base_bdevs_list": [ 00:12:49.939 { 00:12:49.939 "name": "BaseBdev1", 00:12:49.939 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:49.939 "is_configured": true, 00:12:49.939 "data_offset": 2048, 00:12:49.939 "data_size": 63488 00:12:49.939 }, 00:12:49.939 { 00:12:49.939 "name": "BaseBdev2", 00:12:49.939 "uuid": "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3", 00:12:49.939 "is_configured": true, 00:12:49.939 "data_offset": 2048, 00:12:49.939 "data_size": 63488 00:12:49.939 }, 00:12:49.939 { 00:12:49.939 "name": "BaseBdev3", 00:12:49.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.939 "is_configured": false, 00:12:49.939 "data_offset": 0, 00:12:49.939 "data_size": 0 00:12:49.939 } 00:12:49.939 ] 00:12:49.939 }' 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.939 14:12:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 [2024-11-27 14:12:21.104867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.200 [2024-11-27 14:12:21.105279] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:50.200 [2024-11-27 14:12:21.105309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:50.200 [2024-11-27 14:12:21.105610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:50.200 BaseBdev3 00:12:50.200 [2024-11-27 14:12:21.105783] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:50.200 [2024-11-27 14:12:21.105801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:50.200 [2024-11-27 14:12:21.105955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 [ 00:12:50.200 { 00:12:50.200 "name": "BaseBdev3", 00:12:50.200 "aliases": [ 00:12:50.200 "1c15ec6b-00e8-4e8a-9ffc-3730f78ceb56" 00:12:50.200 ], 00:12:50.200 "product_name": "Malloc disk", 00:12:50.200 "block_size": 512, 00:12:50.200 "num_blocks": 65536, 00:12:50.200 "uuid": "1c15ec6b-00e8-4e8a-9ffc-3730f78ceb56", 00:12:50.200 "assigned_rate_limits": { 00:12:50.200 "rw_ios_per_sec": 0, 00:12:50.200 "rw_mbytes_per_sec": 0, 00:12:50.200 "r_mbytes_per_sec": 0, 00:12:50.200 "w_mbytes_per_sec": 0 00:12:50.200 }, 00:12:50.200 "claimed": true, 00:12:50.200 "claim_type": "exclusive_write", 00:12:50.200 "zoned": false, 00:12:50.200 "supported_io_types": { 00:12:50.200 "read": true, 00:12:50.200 "write": true, 00:12:50.200 "unmap": true, 00:12:50.200 "flush": true, 00:12:50.200 "reset": true, 00:12:50.200 "nvme_admin": false, 00:12:50.200 "nvme_io": false, 00:12:50.200 "nvme_io_md": false, 00:12:50.200 "write_zeroes": true, 00:12:50.200 "zcopy": true, 00:12:50.200 "get_zone_info": false, 00:12:50.200 "zone_management": false, 00:12:50.200 "zone_append": false, 00:12:50.200 "compare": false, 00:12:50.200 "compare_and_write": false, 00:12:50.200 "abort": true, 00:12:50.200 "seek_hole": false, 00:12:50.200 "seek_data": false, 00:12:50.200 "copy": true, 00:12:50.200 "nvme_iov_md": false 00:12:50.200 }, 00:12:50.200 "memory_domains": [ 00:12:50.200 { 00:12:50.200 "dma_device_id": "system", 00:12:50.200 "dma_device_type": 1 00:12:50.200 }, 00:12:50.200 { 00:12:50.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.200 "dma_device_type": 2 00:12:50.200 } 00:12:50.200 ], 00:12:50.200 "driver_specific": 
{} 00:12:50.200 } 00:12:50.200 ] 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.200 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.200 
14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.502 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.502 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.502 "name": "Existed_Raid", 00:12:50.502 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:50.502 "strip_size_kb": 64, 00:12:50.502 "state": "online", 00:12:50.502 "raid_level": "raid0", 00:12:50.502 "superblock": true, 00:12:50.502 "num_base_bdevs": 3, 00:12:50.502 "num_base_bdevs_discovered": 3, 00:12:50.502 "num_base_bdevs_operational": 3, 00:12:50.502 "base_bdevs_list": [ 00:12:50.502 { 00:12:50.502 "name": "BaseBdev1", 00:12:50.502 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:50.502 "is_configured": true, 00:12:50.502 "data_offset": 2048, 00:12:50.502 "data_size": 63488 00:12:50.502 }, 00:12:50.502 { 00:12:50.502 "name": "BaseBdev2", 00:12:50.502 "uuid": "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3", 00:12:50.502 "is_configured": true, 00:12:50.502 "data_offset": 2048, 00:12:50.502 "data_size": 63488 00:12:50.502 }, 00:12:50.502 { 00:12:50.502 "name": "BaseBdev3", 00:12:50.502 "uuid": "1c15ec6b-00e8-4e8a-9ffc-3730f78ceb56", 00:12:50.502 "is_configured": true, 00:12:50.502 "data_offset": 2048, 00:12:50.502 "data_size": 63488 00:12:50.502 } 00:12:50.502 ] 00:12:50.502 }' 00:12:50.502 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.502 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.762 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.762 [2024-11-27 14:12:21.544646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.763 "name": "Existed_Raid", 00:12:50.763 "aliases": [ 00:12:50.763 "da1bcf38-5a00-48b2-a18f-37e9248b291d" 00:12:50.763 ], 00:12:50.763 "product_name": "Raid Volume", 00:12:50.763 "block_size": 512, 00:12:50.763 "num_blocks": 190464, 00:12:50.763 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:50.763 "assigned_rate_limits": { 00:12:50.763 "rw_ios_per_sec": 0, 00:12:50.763 "rw_mbytes_per_sec": 0, 00:12:50.763 "r_mbytes_per_sec": 0, 00:12:50.763 "w_mbytes_per_sec": 0 00:12:50.763 }, 00:12:50.763 "claimed": false, 00:12:50.763 "zoned": false, 00:12:50.763 "supported_io_types": { 00:12:50.763 "read": true, 00:12:50.763 "write": true, 00:12:50.763 "unmap": true, 00:12:50.763 "flush": true, 00:12:50.763 "reset": true, 00:12:50.763 "nvme_admin": false, 00:12:50.763 "nvme_io": false, 00:12:50.763 "nvme_io_md": false, 00:12:50.763 
"write_zeroes": true, 00:12:50.763 "zcopy": false, 00:12:50.763 "get_zone_info": false, 00:12:50.763 "zone_management": false, 00:12:50.763 "zone_append": false, 00:12:50.763 "compare": false, 00:12:50.763 "compare_and_write": false, 00:12:50.763 "abort": false, 00:12:50.763 "seek_hole": false, 00:12:50.763 "seek_data": false, 00:12:50.763 "copy": false, 00:12:50.763 "nvme_iov_md": false 00:12:50.763 }, 00:12:50.763 "memory_domains": [ 00:12:50.763 { 00:12:50.763 "dma_device_id": "system", 00:12:50.763 "dma_device_type": 1 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.763 "dma_device_type": 2 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "dma_device_id": "system", 00:12:50.763 "dma_device_type": 1 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.763 "dma_device_type": 2 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "dma_device_id": "system", 00:12:50.763 "dma_device_type": 1 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.763 "dma_device_type": 2 00:12:50.763 } 00:12:50.763 ], 00:12:50.763 "driver_specific": { 00:12:50.763 "raid": { 00:12:50.763 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:50.763 "strip_size_kb": 64, 00:12:50.763 "state": "online", 00:12:50.763 "raid_level": "raid0", 00:12:50.763 "superblock": true, 00:12:50.763 "num_base_bdevs": 3, 00:12:50.763 "num_base_bdevs_discovered": 3, 00:12:50.763 "num_base_bdevs_operational": 3, 00:12:50.763 "base_bdevs_list": [ 00:12:50.763 { 00:12:50.763 "name": "BaseBdev1", 00:12:50.763 "uuid": "44b185ba-8b76-419f-b1c7-c8aa3afcde96", 00:12:50.763 "is_configured": true, 00:12:50.763 "data_offset": 2048, 00:12:50.763 "data_size": 63488 00:12:50.763 }, 00:12:50.763 { 00:12:50.763 "name": "BaseBdev2", 00:12:50.763 "uuid": "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3", 00:12:50.763 "is_configured": true, 00:12:50.763 "data_offset": 2048, 00:12:50.763 "data_size": 63488 00:12:50.763 }, 
00:12:50.763 { 00:12:50.763 "name": "BaseBdev3", 00:12:50.763 "uuid": "1c15ec6b-00e8-4e8a-9ffc-3730f78ceb56", 00:12:50.763 "is_configured": true, 00:12:50.763 "data_offset": 2048, 00:12:50.763 "data_size": 63488 00:12:50.763 } 00:12:50.763 ] 00:12:50.763 } 00:12:50.763 } 00:12:50.763 }' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:50.763 BaseBdev2 00:12:50.763 BaseBdev3' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.763 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.022 
14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.022 [2024-11-27 14:12:21.851915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.022 [2024-11-27 14:12:21.851950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.022 [2024-11-27 14:12:21.852008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.022 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.281 14:12:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.281 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.281 "name": "Existed_Raid", 00:12:51.281 "uuid": "da1bcf38-5a00-48b2-a18f-37e9248b291d", 00:12:51.281 "strip_size_kb": 64, 00:12:51.281 "state": "offline", 00:12:51.281 "raid_level": "raid0", 00:12:51.281 "superblock": true, 00:12:51.281 "num_base_bdevs": 3, 00:12:51.281 "num_base_bdevs_discovered": 2, 00:12:51.281 "num_base_bdevs_operational": 2, 00:12:51.281 "base_bdevs_list": [ 00:12:51.281 { 00:12:51.281 "name": null, 00:12:51.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.281 "is_configured": false, 00:12:51.281 "data_offset": 0, 00:12:51.281 "data_size": 63488 00:12:51.281 }, 00:12:51.281 { 00:12:51.281 "name": "BaseBdev2", 00:12:51.281 "uuid": "41446a7f-be79-4efc-b8e4-b8ebe60c2ad3", 00:12:51.281 "is_configured": true, 00:12:51.281 "data_offset": 2048, 00:12:51.281 "data_size": 63488 00:12:51.281 }, 00:12:51.281 { 00:12:51.281 "name": "BaseBdev3", 00:12:51.281 "uuid": "1c15ec6b-00e8-4e8a-9ffc-3730f78ceb56", 
00:12:51.281 "is_configured": true, 00:12:51.281 "data_offset": 2048, 00:12:51.281 "data_size": 63488 00:12:51.281 } 00:12:51.281 ] 00:12:51.281 }' 00:12:51.281 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.281 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.541 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.541 [2024-11-27 14:12:22.427142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.802 [2024-11-27 14:12:22.592500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.802 [2024-11-27 14:12:22.592629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:51.802 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.061 BaseBdev2 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.061 14:12:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.061 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.061 [ 00:12:52.061 { 00:12:52.061 "name": "BaseBdev2", 00:12:52.061 "aliases": [ 00:12:52.061 "ca27a072-e623-493f-a5c6-7824e7f6630e" 00:12:52.061 ], 00:12:52.061 "product_name": "Malloc disk", 00:12:52.061 "block_size": 512, 00:12:52.061 "num_blocks": 65536, 00:12:52.061 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:52.061 "assigned_rate_limits": { 00:12:52.061 "rw_ios_per_sec": 0, 00:12:52.061 "rw_mbytes_per_sec": 0, 00:12:52.061 "r_mbytes_per_sec": 0, 00:12:52.061 "w_mbytes_per_sec": 0 00:12:52.061 }, 00:12:52.061 "claimed": false, 00:12:52.061 "zoned": false, 00:12:52.061 "supported_io_types": { 00:12:52.061 "read": true, 00:12:52.061 "write": true, 00:12:52.061 "unmap": true, 00:12:52.061 "flush": true, 00:12:52.061 "reset": true, 00:12:52.061 "nvme_admin": false, 00:12:52.061 "nvme_io": false, 00:12:52.061 "nvme_io_md": false, 00:12:52.061 "write_zeroes": true, 00:12:52.062 "zcopy": true, 00:12:52.062 "get_zone_info": false, 00:12:52.062 
"zone_management": false, 00:12:52.062 "zone_append": false, 00:12:52.062 "compare": false, 00:12:52.062 "compare_and_write": false, 00:12:52.062 "abort": true, 00:12:52.062 "seek_hole": false, 00:12:52.062 "seek_data": false, 00:12:52.062 "copy": true, 00:12:52.062 "nvme_iov_md": false 00:12:52.062 }, 00:12:52.062 "memory_domains": [ 00:12:52.062 { 00:12:52.062 "dma_device_id": "system", 00:12:52.062 "dma_device_type": 1 00:12:52.062 }, 00:12:52.062 { 00:12:52.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.062 "dma_device_type": 2 00:12:52.062 } 00:12:52.062 ], 00:12:52.062 "driver_specific": {} 00:12:52.062 } 00:12:52.062 ] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.062 BaseBdev3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.062 [ 00:12:52.062 { 00:12:52.062 "name": "BaseBdev3", 00:12:52.062 "aliases": [ 00:12:52.062 "555e2fb5-ec23-4d6d-b248-f95ad6cd473c" 00:12:52.062 ], 00:12:52.062 "product_name": "Malloc disk", 00:12:52.062 "block_size": 512, 00:12:52.062 "num_blocks": 65536, 00:12:52.062 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:52.062 "assigned_rate_limits": { 00:12:52.062 "rw_ios_per_sec": 0, 00:12:52.062 "rw_mbytes_per_sec": 0, 00:12:52.062 "r_mbytes_per_sec": 0, 00:12:52.062 "w_mbytes_per_sec": 0 00:12:52.062 }, 00:12:52.062 "claimed": false, 00:12:52.062 "zoned": false, 00:12:52.062 "supported_io_types": { 00:12:52.062 "read": true, 00:12:52.062 "write": true, 00:12:52.062 "unmap": true, 00:12:52.062 "flush": true, 00:12:52.062 "reset": true, 00:12:52.062 "nvme_admin": false, 00:12:52.062 "nvme_io": false, 00:12:52.062 "nvme_io_md": false, 00:12:52.062 "write_zeroes": true, 00:12:52.062 
"zcopy": true, 00:12:52.062 "get_zone_info": false, 00:12:52.062 "zone_management": false, 00:12:52.062 "zone_append": false, 00:12:52.062 "compare": false, 00:12:52.062 "compare_and_write": false, 00:12:52.062 "abort": true, 00:12:52.062 "seek_hole": false, 00:12:52.062 "seek_data": false, 00:12:52.062 "copy": true, 00:12:52.062 "nvme_iov_md": false 00:12:52.062 }, 00:12:52.062 "memory_domains": [ 00:12:52.062 { 00:12:52.062 "dma_device_id": "system", 00:12:52.062 "dma_device_type": 1 00:12:52.062 }, 00:12:52.062 { 00:12:52.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.062 "dma_device_type": 2 00:12:52.062 } 00:12:52.062 ], 00:12:52.062 "driver_specific": {} 00:12:52.062 } 00:12:52.062 ] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.062 [2024-11-27 14:12:22.931729] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.062 [2024-11-27 14:12:22.931839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.062 [2024-11-27 14:12:22.931932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.062 [2024-11-27 14:12:22.934299] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.062 14:12:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.062 "name": "Existed_Raid", 00:12:52.062 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:52.062 "strip_size_kb": 64, 00:12:52.062 "state": "configuring", 00:12:52.062 "raid_level": "raid0", 00:12:52.062 "superblock": true, 00:12:52.062 "num_base_bdevs": 3, 00:12:52.062 "num_base_bdevs_discovered": 2, 00:12:52.062 "num_base_bdevs_operational": 3, 00:12:52.062 "base_bdevs_list": [ 00:12:52.062 { 00:12:52.062 "name": "BaseBdev1", 00:12:52.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.062 "is_configured": false, 00:12:52.062 "data_offset": 0, 00:12:52.062 "data_size": 0 00:12:52.062 }, 00:12:52.062 { 00:12:52.062 "name": "BaseBdev2", 00:12:52.062 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:52.062 "is_configured": true, 00:12:52.062 "data_offset": 2048, 00:12:52.062 "data_size": 63488 00:12:52.062 }, 00:12:52.062 { 00:12:52.062 "name": "BaseBdev3", 00:12:52.062 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:52.062 "is_configured": true, 00:12:52.062 "data_offset": 2048, 00:12:52.062 "data_size": 63488 00:12:52.062 } 00:12:52.062 ] 00:12:52.062 }' 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.062 14:12:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.628 [2024-11-27 14:12:23.398945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.628 14:12:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.628 "name": "Existed_Raid", 00:12:52.628 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:52.628 "strip_size_kb": 64, 
00:12:52.628 "state": "configuring", 00:12:52.628 "raid_level": "raid0", 00:12:52.628 "superblock": true, 00:12:52.628 "num_base_bdevs": 3, 00:12:52.628 "num_base_bdevs_discovered": 1, 00:12:52.628 "num_base_bdevs_operational": 3, 00:12:52.628 "base_bdevs_list": [ 00:12:52.628 { 00:12:52.628 "name": "BaseBdev1", 00:12:52.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.628 "is_configured": false, 00:12:52.628 "data_offset": 0, 00:12:52.628 "data_size": 0 00:12:52.628 }, 00:12:52.628 { 00:12:52.628 "name": null, 00:12:52.628 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:52.628 "is_configured": false, 00:12:52.628 "data_offset": 0, 00:12:52.628 "data_size": 63488 00:12:52.628 }, 00:12:52.628 { 00:12:52.628 "name": "BaseBdev3", 00:12:52.628 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:52.628 "is_configured": true, 00:12:52.628 "data_offset": 2048, 00:12:52.628 "data_size": 63488 00:12:52.628 } 00:12:52.628 ] 00:12:52.628 }' 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.628 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.887 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.887 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.887 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.887 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.145 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 [2024-11-27 14:12:23.901756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.146 BaseBdev1 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 
[ 00:12:53.146 { 00:12:53.146 "name": "BaseBdev1", 00:12:53.146 "aliases": [ 00:12:53.146 "1442bdd5-c517-4477-88a9-f0508b5fde63" 00:12:53.146 ], 00:12:53.146 "product_name": "Malloc disk", 00:12:53.146 "block_size": 512, 00:12:53.146 "num_blocks": 65536, 00:12:53.146 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:53.146 "assigned_rate_limits": { 00:12:53.146 "rw_ios_per_sec": 0, 00:12:53.146 "rw_mbytes_per_sec": 0, 00:12:53.146 "r_mbytes_per_sec": 0, 00:12:53.146 "w_mbytes_per_sec": 0 00:12:53.146 }, 00:12:53.146 "claimed": true, 00:12:53.146 "claim_type": "exclusive_write", 00:12:53.146 "zoned": false, 00:12:53.146 "supported_io_types": { 00:12:53.146 "read": true, 00:12:53.146 "write": true, 00:12:53.146 "unmap": true, 00:12:53.146 "flush": true, 00:12:53.146 "reset": true, 00:12:53.146 "nvme_admin": false, 00:12:53.146 "nvme_io": false, 00:12:53.146 "nvme_io_md": false, 00:12:53.146 "write_zeroes": true, 00:12:53.146 "zcopy": true, 00:12:53.146 "get_zone_info": false, 00:12:53.146 "zone_management": false, 00:12:53.146 "zone_append": false, 00:12:53.146 "compare": false, 00:12:53.146 "compare_and_write": false, 00:12:53.146 "abort": true, 00:12:53.146 "seek_hole": false, 00:12:53.146 "seek_data": false, 00:12:53.146 "copy": true, 00:12:53.146 "nvme_iov_md": false 00:12:53.146 }, 00:12:53.146 "memory_domains": [ 00:12:53.146 { 00:12:53.146 "dma_device_id": "system", 00:12:53.146 "dma_device_type": 1 00:12:53.146 }, 00:12:53.146 { 00:12:53.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.146 "dma_device_type": 2 00:12:53.146 } 00:12:53.146 ], 00:12:53.146 "driver_specific": {} 00:12:53.146 } 00:12:53.146 ] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.146 "name": "Existed_Raid", 00:12:53.146 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:53.146 "strip_size_kb": 64, 00:12:53.146 "state": "configuring", 00:12:53.146 "raid_level": "raid0", 00:12:53.146 "superblock": true, 
00:12:53.146 "num_base_bdevs": 3, 00:12:53.146 "num_base_bdevs_discovered": 2, 00:12:53.146 "num_base_bdevs_operational": 3, 00:12:53.146 "base_bdevs_list": [ 00:12:53.146 { 00:12:53.146 "name": "BaseBdev1", 00:12:53.146 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:53.146 "is_configured": true, 00:12:53.146 "data_offset": 2048, 00:12:53.146 "data_size": 63488 00:12:53.146 }, 00:12:53.146 { 00:12:53.146 "name": null, 00:12:53.146 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:53.146 "is_configured": false, 00:12:53.146 "data_offset": 0, 00:12:53.146 "data_size": 63488 00:12:53.146 }, 00:12:53.146 { 00:12:53.146 "name": "BaseBdev3", 00:12:53.146 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:53.146 "is_configured": true, 00:12:53.146 "data_offset": 2048, 00:12:53.146 "data_size": 63488 00:12:53.146 } 00:12:53.146 ] 00:12:53.146 }' 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.146 14:12:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.404 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.404 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.404 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.404 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.662 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.663 [2024-11-27 14:12:24.404995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.663 "name": "Existed_Raid", 00:12:53.663 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:53.663 "strip_size_kb": 64, 00:12:53.663 "state": "configuring", 00:12:53.663 "raid_level": "raid0", 00:12:53.663 "superblock": true, 00:12:53.663 "num_base_bdevs": 3, 00:12:53.663 "num_base_bdevs_discovered": 1, 00:12:53.663 "num_base_bdevs_operational": 3, 00:12:53.663 "base_bdevs_list": [ 00:12:53.663 { 00:12:53.663 "name": "BaseBdev1", 00:12:53.663 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:53.663 "is_configured": true, 00:12:53.663 "data_offset": 2048, 00:12:53.663 "data_size": 63488 00:12:53.663 }, 00:12:53.663 { 00:12:53.663 "name": null, 00:12:53.663 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:53.663 "is_configured": false, 00:12:53.663 "data_offset": 0, 00:12:53.663 "data_size": 63488 00:12:53.663 }, 00:12:53.663 { 00:12:53.663 "name": null, 00:12:53.663 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:53.663 "is_configured": false, 00:12:53.663 "data_offset": 0, 00:12:53.663 "data_size": 63488 00:12:53.663 } 00:12:53.663 ] 00:12:53.663 }' 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.663 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.230 [2024-11-27 14:12:24.924306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.230 "name": "Existed_Raid", 00:12:54.230 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:54.230 "strip_size_kb": 64, 00:12:54.230 "state": "configuring", 00:12:54.230 "raid_level": "raid0", 00:12:54.230 "superblock": true, 00:12:54.230 "num_base_bdevs": 3, 00:12:54.230 "num_base_bdevs_discovered": 2, 00:12:54.230 "num_base_bdevs_operational": 3, 00:12:54.230 "base_bdevs_list": [ 00:12:54.230 { 00:12:54.230 "name": "BaseBdev1", 00:12:54.230 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:54.230 "is_configured": true, 00:12:54.230 "data_offset": 2048, 00:12:54.230 "data_size": 63488 00:12:54.230 }, 00:12:54.230 { 00:12:54.230 "name": null, 00:12:54.230 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:54.230 "is_configured": false, 00:12:54.230 "data_offset": 0, 00:12:54.230 "data_size": 63488 00:12:54.230 }, 00:12:54.230 { 00:12:54.230 "name": "BaseBdev3", 00:12:54.230 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:54.230 "is_configured": true, 00:12:54.230 "data_offset": 2048, 00:12:54.230 "data_size": 63488 00:12:54.230 } 00:12:54.230 ] 00:12:54.230 }' 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.230 14:12:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.542 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.542 [2024-11-27 14:12:25.395729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.828 "name": "Existed_Raid", 00:12:54.828 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:54.828 "strip_size_kb": 64, 00:12:54.828 "state": "configuring", 00:12:54.828 "raid_level": "raid0", 00:12:54.828 "superblock": true, 00:12:54.828 "num_base_bdevs": 3, 00:12:54.828 "num_base_bdevs_discovered": 1, 00:12:54.828 "num_base_bdevs_operational": 3, 00:12:54.828 "base_bdevs_list": [ 00:12:54.828 { 00:12:54.828 "name": null, 00:12:54.828 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:54.828 "is_configured": false, 00:12:54.828 "data_offset": 0, 00:12:54.828 "data_size": 63488 00:12:54.828 }, 00:12:54.828 { 00:12:54.828 "name": null, 00:12:54.828 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:54.828 "is_configured": false, 00:12:54.828 "data_offset": 0, 00:12:54.828 
"data_size": 63488 00:12:54.828 }, 00:12:54.828 { 00:12:54.828 "name": "BaseBdev3", 00:12:54.828 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:54.828 "is_configured": true, 00:12:54.828 "data_offset": 2048, 00:12:54.828 "data_size": 63488 00:12:54.828 } 00:12:54.828 ] 00:12:54.828 }' 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.828 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.088 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.088 14:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:55.088 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.088 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.088 14:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.088 [2024-11-27 14:12:26.009238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:55.088 14:12:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.088 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.359 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.359 "name": "Existed_Raid", 00:12:55.359 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:55.359 "strip_size_kb": 64, 00:12:55.359 "state": "configuring", 00:12:55.359 "raid_level": "raid0", 00:12:55.359 "superblock": true, 00:12:55.359 "num_base_bdevs": 3, 00:12:55.359 
"num_base_bdevs_discovered": 2, 00:12:55.359 "num_base_bdevs_operational": 3, 00:12:55.359 "base_bdevs_list": [ 00:12:55.359 { 00:12:55.359 "name": null, 00:12:55.359 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:55.359 "is_configured": false, 00:12:55.359 "data_offset": 0, 00:12:55.359 "data_size": 63488 00:12:55.359 }, 00:12:55.359 { 00:12:55.359 "name": "BaseBdev2", 00:12:55.359 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:55.359 "is_configured": true, 00:12:55.359 "data_offset": 2048, 00:12:55.359 "data_size": 63488 00:12:55.359 }, 00:12:55.359 { 00:12:55.359 "name": "BaseBdev3", 00:12:55.359 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:55.359 "is_configured": true, 00:12:55.359 "data_offset": 2048, 00:12:55.359 "data_size": 63488 00:12:55.359 } 00:12:55.359 ] 00:12:55.359 }' 00:12:55.359 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.359 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.618 14:12:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1442bdd5-c517-4477-88a9-f0508b5fde63 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.618 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.877 [2024-11-27 14:12:26.604382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.877 [2024-11-27 14:12:26.604784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:55.877 [2024-11-27 14:12:26.604851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:55.877 [2024-11-27 14:12:26.605182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:55.877 NewBaseBdev 00:12:55.877 [2024-11-27 14:12:26.605421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:55.877 [2024-11-27 14:12:26.605467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:55.877 [2024-11-27 14:12:26.605669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.877 [ 00:12:55.877 { 00:12:55.877 "name": "NewBaseBdev", 00:12:55.877 "aliases": [ 00:12:55.877 "1442bdd5-c517-4477-88a9-f0508b5fde63" 00:12:55.877 ], 00:12:55.877 "product_name": "Malloc disk", 00:12:55.877 "block_size": 512, 00:12:55.877 "num_blocks": 65536, 00:12:55.877 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:55.877 "assigned_rate_limits": { 00:12:55.877 "rw_ios_per_sec": 0, 00:12:55.877 "rw_mbytes_per_sec": 0, 00:12:55.877 "r_mbytes_per_sec": 0, 00:12:55.877 "w_mbytes_per_sec": 0 00:12:55.877 }, 00:12:55.877 "claimed": true, 00:12:55.877 "claim_type": "exclusive_write", 00:12:55.877 "zoned": false, 00:12:55.877 "supported_io_types": { 00:12:55.877 "read": true, 00:12:55.877 "write": true, 
00:12:55.877 "unmap": true, 00:12:55.877 "flush": true, 00:12:55.877 "reset": true, 00:12:55.877 "nvme_admin": false, 00:12:55.877 "nvme_io": false, 00:12:55.877 "nvme_io_md": false, 00:12:55.877 "write_zeroes": true, 00:12:55.877 "zcopy": true, 00:12:55.877 "get_zone_info": false, 00:12:55.877 "zone_management": false, 00:12:55.877 "zone_append": false, 00:12:55.877 "compare": false, 00:12:55.877 "compare_and_write": false, 00:12:55.877 "abort": true, 00:12:55.877 "seek_hole": false, 00:12:55.877 "seek_data": false, 00:12:55.877 "copy": true, 00:12:55.877 "nvme_iov_md": false 00:12:55.877 }, 00:12:55.877 "memory_domains": [ 00:12:55.877 { 00:12:55.877 "dma_device_id": "system", 00:12:55.877 "dma_device_type": 1 00:12:55.877 }, 00:12:55.877 { 00:12:55.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.877 "dma_device_type": 2 00:12:55.877 } 00:12:55.877 ], 00:12:55.877 "driver_specific": {} 00:12:55.877 } 00:12:55.877 ] 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.877 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.878 "name": "Existed_Raid", 00:12:55.878 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:55.878 "strip_size_kb": 64, 00:12:55.878 "state": "online", 00:12:55.878 "raid_level": "raid0", 00:12:55.878 "superblock": true, 00:12:55.878 "num_base_bdevs": 3, 00:12:55.878 "num_base_bdevs_discovered": 3, 00:12:55.878 "num_base_bdevs_operational": 3, 00:12:55.878 "base_bdevs_list": [ 00:12:55.878 { 00:12:55.878 "name": "NewBaseBdev", 00:12:55.878 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:55.878 "is_configured": true, 00:12:55.878 "data_offset": 2048, 00:12:55.878 "data_size": 63488 00:12:55.878 }, 00:12:55.878 { 00:12:55.878 "name": "BaseBdev2", 00:12:55.878 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:55.878 "is_configured": true, 00:12:55.878 "data_offset": 2048, 00:12:55.878 "data_size": 63488 00:12:55.878 }, 00:12:55.878 { 00:12:55.878 "name": "BaseBdev3", 00:12:55.878 "uuid": 
"555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:55.878 "is_configured": true, 00:12:55.878 "data_offset": 2048, 00:12:55.878 "data_size": 63488 00:12:55.878 } 00:12:55.878 ] 00:12:55.878 }' 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.878 14:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.446 [2024-11-27 14:12:27.131874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:56.446 "name": "Existed_Raid", 00:12:56.446 "aliases": [ 00:12:56.446 "36a95e91-f11e-4214-92e2-51d7b41a7d36" 
00:12:56.446 ], 00:12:56.446 "product_name": "Raid Volume", 00:12:56.446 "block_size": 512, 00:12:56.446 "num_blocks": 190464, 00:12:56.446 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:56.446 "assigned_rate_limits": { 00:12:56.446 "rw_ios_per_sec": 0, 00:12:56.446 "rw_mbytes_per_sec": 0, 00:12:56.446 "r_mbytes_per_sec": 0, 00:12:56.446 "w_mbytes_per_sec": 0 00:12:56.446 }, 00:12:56.446 "claimed": false, 00:12:56.446 "zoned": false, 00:12:56.446 "supported_io_types": { 00:12:56.446 "read": true, 00:12:56.446 "write": true, 00:12:56.446 "unmap": true, 00:12:56.446 "flush": true, 00:12:56.446 "reset": true, 00:12:56.446 "nvme_admin": false, 00:12:56.446 "nvme_io": false, 00:12:56.446 "nvme_io_md": false, 00:12:56.446 "write_zeroes": true, 00:12:56.446 "zcopy": false, 00:12:56.446 "get_zone_info": false, 00:12:56.446 "zone_management": false, 00:12:56.446 "zone_append": false, 00:12:56.446 "compare": false, 00:12:56.446 "compare_and_write": false, 00:12:56.446 "abort": false, 00:12:56.446 "seek_hole": false, 00:12:56.446 "seek_data": false, 00:12:56.446 "copy": false, 00:12:56.446 "nvme_iov_md": false 00:12:56.446 }, 00:12:56.446 "memory_domains": [ 00:12:56.446 { 00:12:56.446 "dma_device_id": "system", 00:12:56.446 "dma_device_type": 1 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.446 "dma_device_type": 2 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "dma_device_id": "system", 00:12:56.446 "dma_device_type": 1 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.446 "dma_device_type": 2 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "dma_device_id": "system", 00:12:56.446 "dma_device_type": 1 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.446 "dma_device_type": 2 00:12:56.446 } 00:12:56.446 ], 00:12:56.446 "driver_specific": { 00:12:56.446 "raid": { 00:12:56.446 "uuid": "36a95e91-f11e-4214-92e2-51d7b41a7d36", 00:12:56.446 
"strip_size_kb": 64, 00:12:56.446 "state": "online", 00:12:56.446 "raid_level": "raid0", 00:12:56.446 "superblock": true, 00:12:56.446 "num_base_bdevs": 3, 00:12:56.446 "num_base_bdevs_discovered": 3, 00:12:56.446 "num_base_bdevs_operational": 3, 00:12:56.446 "base_bdevs_list": [ 00:12:56.446 { 00:12:56.446 "name": "NewBaseBdev", 00:12:56.446 "uuid": "1442bdd5-c517-4477-88a9-f0508b5fde63", 00:12:56.446 "is_configured": true, 00:12:56.446 "data_offset": 2048, 00:12:56.446 "data_size": 63488 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "name": "BaseBdev2", 00:12:56.446 "uuid": "ca27a072-e623-493f-a5c6-7824e7f6630e", 00:12:56.446 "is_configured": true, 00:12:56.446 "data_offset": 2048, 00:12:56.446 "data_size": 63488 00:12:56.446 }, 00:12:56.446 { 00:12:56.446 "name": "BaseBdev3", 00:12:56.446 "uuid": "555e2fb5-ec23-4d6d-b248-f95ad6cd473c", 00:12:56.446 "is_configured": true, 00:12:56.446 "data_offset": 2048, 00:12:56.446 "data_size": 63488 00:12:56.446 } 00:12:56.446 ] 00:12:56.446 } 00:12:56.446 } 00:12:56.446 }' 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:56.446 BaseBdev2 00:12:56.446 BaseBdev3' 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.446 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.447 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.706 [2024-11-27 14:12:27.411122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.706 [2024-11-27 14:12:27.411164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.706 [2024-11-27 14:12:27.411260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.706 [2024-11-27 14:12:27.411318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.706 [2024-11-27 14:12:27.411331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64635 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64635 ']' 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 64635 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64635 00:12:56.706 killing process with pid 64635 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64635' 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64635 00:12:56.706 [2024-11-27 14:12:27.449939] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.706 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64635 00:12:56.965 [2024-11-27 14:12:27.782377] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.342 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:58.342 00:12:58.342 real 0m10.967s 00:12:58.342 user 0m17.373s 00:12:58.342 sys 0m1.840s 00:12:58.342 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.342 ************************************ 00:12:58.342 END TEST raid_state_function_test_sb 00:12:58.342 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.342 ************************************ 00:12:58.342 14:12:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:58.342 14:12:29 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:58.342 14:12:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.342 14:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.342 ************************************ 00:12:58.342 START TEST raid_superblock_test 00:12:58.342 ************************************ 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:58.342 14:12:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65261 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65261 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65261 ']' 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.342 14:12:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.342 [2024-11-27 14:12:29.184964] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:12:58.343 [2024-11-27 14:12:29.185198] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65261 ] 00:12:58.599 [2024-11-27 14:12:29.362964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.599 [2024-11-27 14:12:29.486939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.857 [2024-11-27 14:12:29.701203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.857 [2024-11-27 14:12:29.701304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:59.426 
14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 malloc1 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 [2024-11-27 14:12:30.150913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:59.426 [2024-11-27 14:12:30.151039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.426 [2024-11-27 14:12:30.151085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.426 [2024-11-27 14:12:30.151127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.426 [2024-11-27 14:12:30.153503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.426 [2024-11-27 14:12:30.153581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:59.426 pt1 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 malloc2 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 [2024-11-27 14:12:30.213665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.426 [2024-11-27 14:12:30.213796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.426 [2024-11-27 14:12:30.213833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.426 [2024-11-27 14:12:30.213844] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.426 [2024-11-27 14:12:30.216234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.426 [2024-11-27 14:12:30.216274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.426 
pt2 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 malloc3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 [2024-11-27 14:12:30.288507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.426 [2024-11-27 14:12:30.288627] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.426 [2024-11-27 14:12:30.288679] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:59.426 [2024-11-27 14:12:30.288722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.426 [2024-11-27 14:12:30.291273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.426 [2024-11-27 14:12:30.291347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.426 pt3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 [2024-11-27 14:12:30.300530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.426 [2024-11-27 14:12:30.302491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.426 [2024-11-27 14:12:30.302598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.426 [2024-11-27 14:12:30.302809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.426 [2024-11-27 14:12:30.302866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:59.426 [2024-11-27 14:12:30.303181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:59.426 [2024-11-27 14:12:30.303402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.426 [2024-11-27 14:12:30.303446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.426 [2024-11-27 14:12:30.303648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.426 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.426 "name": "raid_bdev1", 00:12:59.426 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:12:59.426 "strip_size_kb": 64, 00:12:59.426 "state": "online", 00:12:59.426 "raid_level": "raid0", 00:12:59.426 "superblock": true, 00:12:59.426 "num_base_bdevs": 3, 00:12:59.426 "num_base_bdevs_discovered": 3, 00:12:59.426 "num_base_bdevs_operational": 3, 00:12:59.426 "base_bdevs_list": [ 00:12:59.426 { 00:12:59.426 "name": "pt1", 00:12:59.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.426 "is_configured": true, 00:12:59.426 "data_offset": 2048, 00:12:59.426 "data_size": 63488 00:12:59.426 }, 00:12:59.426 { 00:12:59.426 "name": "pt2", 00:12:59.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.426 "is_configured": true, 00:12:59.426 "data_offset": 2048, 00:12:59.426 "data_size": 63488 00:12:59.426 }, 00:12:59.427 { 00:12:59.427 "name": "pt3", 00:12:59.427 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.427 "is_configured": true, 00:12:59.427 "data_offset": 2048, 00:12:59.427 "data_size": 63488 00:12:59.427 } 00:12:59.427 ] 00:12:59.427 }' 00:12:59.427 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.427 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.993 [2024-11-27 14:12:30.720157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.993 "name": "raid_bdev1", 00:12:59.993 "aliases": [ 00:12:59.993 "33ef0b3d-e97f-45cb-8065-18117015e0e7" 00:12:59.993 ], 00:12:59.993 "product_name": "Raid Volume", 00:12:59.993 "block_size": 512, 00:12:59.993 "num_blocks": 190464, 00:12:59.993 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:12:59.993 "assigned_rate_limits": { 00:12:59.993 "rw_ios_per_sec": 0, 00:12:59.993 "rw_mbytes_per_sec": 0, 00:12:59.993 "r_mbytes_per_sec": 0, 00:12:59.993 "w_mbytes_per_sec": 0 00:12:59.993 }, 00:12:59.993 "claimed": false, 00:12:59.993 "zoned": false, 00:12:59.993 "supported_io_types": { 00:12:59.993 "read": true, 00:12:59.993 "write": true, 00:12:59.993 "unmap": true, 00:12:59.993 "flush": true, 00:12:59.993 "reset": true, 00:12:59.993 "nvme_admin": false, 00:12:59.993 "nvme_io": false, 00:12:59.993 "nvme_io_md": false, 00:12:59.993 "write_zeroes": true, 00:12:59.993 "zcopy": false, 00:12:59.993 "get_zone_info": false, 00:12:59.993 "zone_management": false, 00:12:59.993 "zone_append": false, 00:12:59.993 "compare": 
false, 00:12:59.993 "compare_and_write": false, 00:12:59.993 "abort": false, 00:12:59.993 "seek_hole": false, 00:12:59.993 "seek_data": false, 00:12:59.993 "copy": false, 00:12:59.993 "nvme_iov_md": false 00:12:59.993 }, 00:12:59.993 "memory_domains": [ 00:12:59.993 { 00:12:59.993 "dma_device_id": "system", 00:12:59.993 "dma_device_type": 1 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.993 "dma_device_type": 2 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "dma_device_id": "system", 00:12:59.993 "dma_device_type": 1 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.993 "dma_device_type": 2 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "dma_device_id": "system", 00:12:59.993 "dma_device_type": 1 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.993 "dma_device_type": 2 00:12:59.993 } 00:12:59.993 ], 00:12:59.993 "driver_specific": { 00:12:59.993 "raid": { 00:12:59.993 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:12:59.993 "strip_size_kb": 64, 00:12:59.993 "state": "online", 00:12:59.993 "raid_level": "raid0", 00:12:59.993 "superblock": true, 00:12:59.993 "num_base_bdevs": 3, 00:12:59.993 "num_base_bdevs_discovered": 3, 00:12:59.993 "num_base_bdevs_operational": 3, 00:12:59.993 "base_bdevs_list": [ 00:12:59.993 { 00:12:59.993 "name": "pt1", 00:12:59.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.993 "is_configured": true, 00:12:59.993 "data_offset": 2048, 00:12:59.993 "data_size": 63488 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "name": "pt2", 00:12:59.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.993 "is_configured": true, 00:12:59.993 "data_offset": 2048, 00:12:59.993 "data_size": 63488 00:12:59.993 }, 00:12:59.993 { 00:12:59.993 "name": "pt3", 00:12:59.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.993 "is_configured": true, 00:12:59.993 "data_offset": 2048, 00:12:59.993 "data_size": 
63488 00:12:59.993 } 00:12:59.993 ] 00:12:59.993 } 00:12:59.993 } 00:12:59.993 }' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.993 pt2 00:12:59.993 pt3' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.993 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.253 14:12:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:00.253 [2024-11-27 14:12:31.011628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=33ef0b3d-e97f-45cb-8065-18117015e0e7 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 33ef0b3d-e97f-45cb-8065-18117015e0e7 ']' 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 [2024-11-27 14:12:31.059244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.253 [2024-11-27 14:12:31.059278] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.253 [2024-11-27 14:12:31.059378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.253 [2024-11-27 14:12:31.059451] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.253 [2024-11-27 14:12:31.059463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:00.253 14:12:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.253 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.512 [2024-11-27 14:12:31.215031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:00.512 [2024-11-27 14:12:31.217092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:00.512 [2024-11-27 14:12:31.217167] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:00.512 [2024-11-27 14:12:31.217239] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:00.512 [2024-11-27 14:12:31.217296] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:00.512 [2024-11-27 14:12:31.217320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:00.512 [2024-11-27 14:12:31.217349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.512 [2024-11-27 14:12:31.217378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:00.512 request: 00:13:00.512 { 00:13:00.512 "name": "raid_bdev1", 00:13:00.512 "raid_level": "raid0", 00:13:00.512 "base_bdevs": [ 00:13:00.512 "malloc1", 00:13:00.512 "malloc2", 00:13:00.512 "malloc3" 00:13:00.512 ], 00:13:00.512 "strip_size_kb": 64, 00:13:00.512 "superblock": false, 00:13:00.512 "method": "bdev_raid_create", 00:13:00.512 "req_id": 1 00:13:00.512 } 00:13:00.512 Got JSON-RPC error response 00:13:00.512 response: 00:13:00.512 { 00:13:00.512 "code": -17, 00:13:00.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:00.512 } 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.512 [2024-11-27 14:12:31.282859] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.512 [2024-11-27 14:12:31.282980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.512 [2024-11-27 14:12:31.283034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:00.512 [2024-11-27 14:12:31.283070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.512 [2024-11-27 14:12:31.285654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.512 [2024-11-27 14:12:31.285739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.512 [2024-11-27 14:12:31.285876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.512 [2024-11-27 14:12:31.285970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:13:00.512 pt1 00:13:00.512 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.513 "name": "raid_bdev1", 00:13:00.513 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:13:00.513 
"strip_size_kb": 64, 00:13:00.513 "state": "configuring", 00:13:00.513 "raid_level": "raid0", 00:13:00.513 "superblock": true, 00:13:00.513 "num_base_bdevs": 3, 00:13:00.513 "num_base_bdevs_discovered": 1, 00:13:00.513 "num_base_bdevs_operational": 3, 00:13:00.513 "base_bdevs_list": [ 00:13:00.513 { 00:13:00.513 "name": "pt1", 00:13:00.513 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.513 "is_configured": true, 00:13:00.513 "data_offset": 2048, 00:13:00.513 "data_size": 63488 00:13:00.513 }, 00:13:00.513 { 00:13:00.513 "name": null, 00:13:00.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.513 "is_configured": false, 00:13:00.513 "data_offset": 2048, 00:13:00.513 "data_size": 63488 00:13:00.513 }, 00:13:00.513 { 00:13:00.513 "name": null, 00:13:00.513 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.513 "is_configured": false, 00:13:00.513 "data_offset": 2048, 00:13:00.513 "data_size": 63488 00:13:00.513 } 00:13:00.513 ] 00:13:00.513 }' 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.513 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.080 [2024-11-27 14:12:31.742096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.080 [2024-11-27 14:12:31.742249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.080 [2024-11-27 14:12:31.742302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:13:01.080 [2024-11-27 14:12:31.742313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.080 [2024-11-27 14:12:31.742793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.080 [2024-11-27 14:12:31.742810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.080 [2024-11-27 14:12:31.742896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:01.080 [2024-11-27 14:12:31.742925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.080 pt2 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.080 [2024-11-27 14:12:31.754109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.080 14:12:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.080 "name": "raid_bdev1", 00:13:01.080 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:13:01.080 "strip_size_kb": 64, 00:13:01.080 "state": "configuring", 00:13:01.080 "raid_level": "raid0", 00:13:01.080 "superblock": true, 00:13:01.080 "num_base_bdevs": 3, 00:13:01.080 "num_base_bdevs_discovered": 1, 00:13:01.080 "num_base_bdevs_operational": 3, 00:13:01.080 "base_bdevs_list": [ 00:13:01.080 { 00:13:01.080 "name": "pt1", 00:13:01.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.080 "is_configured": true, 00:13:01.080 "data_offset": 2048, 00:13:01.080 "data_size": 63488 00:13:01.080 }, 00:13:01.080 { 00:13:01.080 "name": null, 00:13:01.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.080 "is_configured": false, 00:13:01.080 "data_offset": 0, 00:13:01.080 "data_size": 63488 00:13:01.080 }, 00:13:01.080 { 00:13:01.080 "name": null, 00:13:01.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.080 
"is_configured": false, 00:13:01.080 "data_offset": 2048, 00:13:01.080 "data_size": 63488 00:13:01.080 } 00:13:01.080 ] 00:13:01.080 }' 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.080 14:12:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 [2024-11-27 14:12:32.205351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.339 [2024-11-27 14:12:32.205481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.339 [2024-11-27 14:12:32.205542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:01.339 [2024-11-27 14:12:32.205584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.339 [2024-11-27 14:12:32.206114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.339 [2024-11-27 14:12:32.206201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.339 [2024-11-27 14:12:32.206328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:01.339 [2024-11-27 14:12:32.206387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.339 pt2 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.339 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.339 [2024-11-27 14:12:32.217296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.340 [2024-11-27 14:12:32.217397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.340 [2024-11-27 14:12:32.217428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.340 [2024-11-27 14:12:32.217457] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.340 [2024-11-27 14:12:32.217899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.340 [2024-11-27 14:12:32.217965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.340 [2024-11-27 14:12:32.218063] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.340 [2024-11-27 14:12:32.218130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.340 [2024-11-27 14:12:32.218309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:01.340 [2024-11-27 14:12:32.218354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:01.340 [2024-11-27 14:12:32.218677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:01.340 [2024-11-27 14:12:32.218903] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:01.340 [2024-11-27 14:12:32.218945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:01.340 [2024-11-27 14:12:32.219178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.340 pt3 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.340 "name": "raid_bdev1", 00:13:01.340 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:13:01.340 "strip_size_kb": 64, 00:13:01.340 "state": "online", 00:13:01.340 "raid_level": "raid0", 00:13:01.340 "superblock": true, 00:13:01.340 "num_base_bdevs": 3, 00:13:01.340 "num_base_bdevs_discovered": 3, 00:13:01.340 "num_base_bdevs_operational": 3, 00:13:01.340 "base_bdevs_list": [ 00:13:01.340 { 00:13:01.340 "name": "pt1", 00:13:01.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 }, 00:13:01.340 { 00:13:01.340 "name": "pt2", 00:13:01.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 }, 00:13:01.340 { 00:13:01.340 "name": "pt3", 00:13:01.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.340 "is_configured": true, 00:13:01.340 "data_offset": 2048, 00:13:01.340 "data_size": 63488 00:13:01.340 } 00:13:01.340 ] 00:13:01.340 }' 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.340 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:01.929 14:12:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.929 [2024-11-27 14:12:32.684879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.929 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.929 "name": "raid_bdev1", 00:13:01.929 "aliases": [ 00:13:01.929 "33ef0b3d-e97f-45cb-8065-18117015e0e7" 00:13:01.929 ], 00:13:01.929 "product_name": "Raid Volume", 00:13:01.929 "block_size": 512, 00:13:01.929 "num_blocks": 190464, 00:13:01.929 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:13:01.929 "assigned_rate_limits": { 00:13:01.929 "rw_ios_per_sec": 0, 00:13:01.929 "rw_mbytes_per_sec": 0, 00:13:01.929 "r_mbytes_per_sec": 0, 00:13:01.929 "w_mbytes_per_sec": 0 00:13:01.929 }, 00:13:01.929 "claimed": false, 00:13:01.929 "zoned": false, 00:13:01.929 "supported_io_types": { 00:13:01.929 "read": true, 00:13:01.929 "write": true, 00:13:01.929 "unmap": true, 00:13:01.929 "flush": true, 00:13:01.930 "reset": true, 00:13:01.930 "nvme_admin": false, 00:13:01.930 "nvme_io": false, 00:13:01.930 "nvme_io_md": false, 00:13:01.930 
"write_zeroes": true, 00:13:01.930 "zcopy": false, 00:13:01.930 "get_zone_info": false, 00:13:01.930 "zone_management": false, 00:13:01.930 "zone_append": false, 00:13:01.930 "compare": false, 00:13:01.930 "compare_and_write": false, 00:13:01.930 "abort": false, 00:13:01.930 "seek_hole": false, 00:13:01.930 "seek_data": false, 00:13:01.930 "copy": false, 00:13:01.930 "nvme_iov_md": false 00:13:01.930 }, 00:13:01.930 "memory_domains": [ 00:13:01.930 { 00:13:01.930 "dma_device_id": "system", 00:13:01.930 "dma_device_type": 1 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.930 "dma_device_type": 2 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "dma_device_id": "system", 00:13:01.930 "dma_device_type": 1 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.930 "dma_device_type": 2 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "dma_device_id": "system", 00:13:01.930 "dma_device_type": 1 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.930 "dma_device_type": 2 00:13:01.930 } 00:13:01.930 ], 00:13:01.930 "driver_specific": { 00:13:01.930 "raid": { 00:13:01.930 "uuid": "33ef0b3d-e97f-45cb-8065-18117015e0e7", 00:13:01.930 "strip_size_kb": 64, 00:13:01.930 "state": "online", 00:13:01.930 "raid_level": "raid0", 00:13:01.930 "superblock": true, 00:13:01.930 "num_base_bdevs": 3, 00:13:01.930 "num_base_bdevs_discovered": 3, 00:13:01.930 "num_base_bdevs_operational": 3, 00:13:01.930 "base_bdevs_list": [ 00:13:01.930 { 00:13:01.930 "name": "pt1", 00:13:01.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.930 "is_configured": true, 00:13:01.930 "data_offset": 2048, 00:13:01.930 "data_size": 63488 00:13:01.930 }, 00:13:01.930 { 00:13:01.930 "name": "pt2", 00:13:01.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.930 "is_configured": true, 00:13:01.930 "data_offset": 2048, 00:13:01.930 "data_size": 63488 00:13:01.930 }, 00:13:01.930 
{ 00:13:01.930 "name": "pt3", 00:13:01.930 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.930 "is_configured": true, 00:13:01.930 "data_offset": 2048, 00:13:01.930 "data_size": 63488 00:13:01.930 } 00:13:01.930 ] 00:13:01.930 } 00:13:01.930 } 00:13:01.930 }' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.930 pt2 00:13:01.930 pt3' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:01.930 14:12:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.930 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.190 14:12:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.190 
[2024-11-27 14:12:32.980571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 33ef0b3d-e97f-45cb-8065-18117015e0e7 '!=' 33ef0b3d-e97f-45cb-8065-18117015e0e7 ']' 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65261 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65261 ']' 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65261 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65261 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.190 killing process with pid 65261 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65261' 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65261 00:13:02.190 [2024-11-27 14:12:33.062931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.190 [2024-11-27 14:12:33.063104] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.190 [2024-11-27 14:12:33.063210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.190 14:12:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65261 00:13:02.190 [2024-11-27 14:12:33.063227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:02.452 [2024-11-27 14:12:33.390197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.831 14:12:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:03.831 00:13:03.831 real 0m5.488s 00:13:03.831 user 0m7.913s 00:13:03.831 sys 0m0.864s 00:13:03.831 14:12:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.831 14:12:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.831 ************************************ 00:13:03.831 END TEST raid_superblock_test 00:13:03.831 ************************************ 00:13:03.831 14:12:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:13:03.831 14:12:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:03.831 14:12:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.831 14:12:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.831 ************************************ 00:13:03.831 START TEST raid_read_error_test 00:13:03.831 ************************************ 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:03.831 14:12:34
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DcIVXYC6XS 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65514 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65514 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65514 ']' 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.831 14:12:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.831 [2024-11-27 14:12:34.750846] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:03.831 [2024-11-27 14:12:34.751072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65514 ] 00:13:04.090 [2024-11-27 14:12:34.925598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.350 [2024-11-27 14:12:35.053779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.350 [2024-11-27 14:12:35.288606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.350 [2024-11-27 14:12:35.288677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.921 BaseBdev1_malloc 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.921 true 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.921 [2024-11-27 14:12:35.679585] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.921 [2024-11-27 14:12:35.679658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.921 [2024-11-27 14:12:35.679682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.921 [2024-11-27 14:12:35.679694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.921 [2024-11-27 14:12:35.682156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.921 [2024-11-27 14:12:35.682190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.921 BaseBdev1 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.921 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 BaseBdev2_malloc 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 true 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 [2024-11-27 14:12:35.748885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.922 [2024-11-27 14:12:35.748943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.922 [2024-11-27 14:12:35.748962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.922 [2024-11-27 14:12:35.748974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.922 [2024-11-27 14:12:35.751311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.922 [2024-11-27 14:12:35.751425] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.922 BaseBdev2 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 BaseBdev3_malloc 00:13:04.922 14:12:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 true 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 [2024-11-27 14:12:35.828481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:04.922 [2024-11-27 14:12:35.828545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.922 [2024-11-27 14:12:35.828569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:04.922 [2024-11-27 14:12:35.828582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.922 [2024-11-27 14:12:35.830926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.922 [2024-11-27 14:12:35.830969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:04.922 BaseBdev3 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 [2024-11-27 14:12:35.840536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.922 [2024-11-27 14:12:35.842571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.922 [2024-11-27 14:12:35.842650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.922 [2024-11-27 14:12:35.842875] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:04.922 [2024-11-27 14:12:35.842890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.922 [2024-11-27 14:12:35.843166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:04.922 [2024-11-27 14:12:35.843350] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:04.922 [2024-11-27 14:12:35.843364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:04.922 [2024-11-27 14:12:35.843528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.922 14:12:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.922 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.181 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.181 "name": "raid_bdev1", 00:13:05.181 "uuid": "c4e88ff2-37ab-4182-8db5-d10815d365aa", 00:13:05.181 "strip_size_kb": 64, 00:13:05.181 "state": "online", 00:13:05.181 "raid_level": "raid0", 00:13:05.181 "superblock": true, 00:13:05.181 "num_base_bdevs": 3, 00:13:05.181 "num_base_bdevs_discovered": 3, 00:13:05.181 "num_base_bdevs_operational": 3, 00:13:05.181 "base_bdevs_list": [ 00:13:05.181 { 00:13:05.181 "name": "BaseBdev1", 00:13:05.181 "uuid": "3a4a2c4b-f1e1-5970-85e3-228fdd93cdc2", 00:13:05.181 "is_configured": true, 00:13:05.181 "data_offset": 2048, 00:13:05.181 "data_size": 63488 00:13:05.181 }, 00:13:05.181 { 00:13:05.181 "name": "BaseBdev2", 00:13:05.181 "uuid": "7d3e0702-8396-5fb5-aade-67988abff4ed", 00:13:05.181 "is_configured": true, 00:13:05.181 "data_offset": 2048, 00:13:05.181 "data_size": 63488 
00:13:05.181 }, 00:13:05.181 { 00:13:05.181 "name": "BaseBdev3", 00:13:05.181 "uuid": "03f701d7-4767-5671-a6f6-b4f15bd79f4c", 00:13:05.181 "is_configured": true, 00:13:05.181 "data_offset": 2048, 00:13:05.181 "data_size": 63488 00:13:05.181 } 00:13:05.181 ] 00:13:05.181 }' 00:13:05.181 14:12:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.181 14:12:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.440 14:12:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.440 14:12:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.699 [2024-11-27 14:12:36.397442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:06.638 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:06.638 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.638 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.638 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.638 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.639 "name": "raid_bdev1", 00:13:06.639 "uuid": "c4e88ff2-37ab-4182-8db5-d10815d365aa", 00:13:06.639 "strip_size_kb": 64, 00:13:06.639 "state": "online", 00:13:06.639 "raid_level": "raid0", 00:13:06.639 "superblock": true, 00:13:06.639 "num_base_bdevs": 3, 00:13:06.639 "num_base_bdevs_discovered": 3, 00:13:06.639 "num_base_bdevs_operational": 3, 00:13:06.639 "base_bdevs_list": [ 00:13:06.639 { 00:13:06.639 "name": "BaseBdev1", 00:13:06.639 "uuid": "3a4a2c4b-f1e1-5970-85e3-228fdd93cdc2", 00:13:06.639 "is_configured": true, 00:13:06.639 "data_offset": 2048, 00:13:06.639 "data_size": 63488 
00:13:06.639 }, 00:13:06.639 { 00:13:06.639 "name": "BaseBdev2", 00:13:06.639 "uuid": "7d3e0702-8396-5fb5-aade-67988abff4ed", 00:13:06.639 "is_configured": true, 00:13:06.639 "data_offset": 2048, 00:13:06.639 "data_size": 63488 00:13:06.639 }, 00:13:06.639 { 00:13:06.639 "name": "BaseBdev3", 00:13:06.639 "uuid": "03f701d7-4767-5671-a6f6-b4f15bd79f4c", 00:13:06.639 "is_configured": true, 00:13:06.639 "data_offset": 2048, 00:13:06.639 "data_size": 63488 00:13:06.639 } 00:13:06.639 ] 00:13:06.639 }' 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.639 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.899 [2024-11-27 14:12:37.750029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.899 [2024-11-27 14:12:37.750060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.899 [2024-11-27 14:12:37.752976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.899 [2024-11-27 14:12:37.753101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.899 [2024-11-27 14:12:37.753168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.899 [2024-11-27 14:12:37.753181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:06.899 { 00:13:06.899 "results": [ 00:13:06.899 { 00:13:06.899 "job": "raid_bdev1", 00:13:06.899 "core_mask": "0x1", 00:13:06.899 "workload": "randrw", 00:13:06.899 "percentage": 50, 
00:13:06.899 "status": "finished", 00:13:06.899 "queue_depth": 1, 00:13:06.899 "io_size": 131072, 00:13:06.899 "runtime": 1.353384, 00:13:06.899 "iops": 14547.977514142327, 00:13:06.899 "mibps": 1818.4971892677909, 00:13:06.899 "io_failed": 1, 00:13:06.899 "io_timeout": 0, 00:13:06.899 "avg_latency_us": 95.27512815451729, 00:13:06.899 "min_latency_us": 20.34585152838428, 00:13:06.899 "max_latency_us": 1609.7816593886462 00:13:06.899 } 00:13:06.899 ], 00:13:06.899 "core_count": 1 00:13:06.899 } 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65514 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65514 ']' 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65514 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65514 00:13:06.899 killing process with pid 65514 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65514' 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65514 00:13:06.899 [2024-11-27 14:12:37.799698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.899 14:12:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65514 00:13:07.159 [2024-11-27 
14:12:38.039890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DcIVXYC6XS 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:08.537 00:13:08.537 real 0m4.648s 00:13:08.537 user 0m5.537s 00:13:08.537 sys 0m0.555s 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.537 14:12:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 ************************************ 00:13:08.537 END TEST raid_read_error_test 00:13:08.537 ************************************ 00:13:08.537 14:12:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:13:08.537 14:12:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:08.537 14:12:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.537 14:12:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.537 ************************************ 00:13:08.537 START TEST raid_write_error_test 00:13:08.537 ************************************ 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:13:08.537 14:12:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.537 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:08.538 14:12:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dFHK7pRLoc 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65661 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65661 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65661 ']' 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.538 14:12:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.538 [2024-11-27 14:12:39.471585] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:08.538 [2024-11-27 14:12:39.471767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65661 ] 00:13:08.810 [2024-11-27 14:12:39.646142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.070 [2024-11-27 14:12:39.771607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.070 [2024-11-27 14:12:39.993152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.070 [2024-11-27 14:12:39.993319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.639 BaseBdev1_malloc 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.639 true 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:09.639 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 [2024-11-27 14:12:40.405420] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:09.640 [2024-11-27 14:12:40.405540] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.640 [2024-11-27 14:12:40.405565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:09.640 [2024-11-27 14:12:40.405577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.640 [2024-11-27 14:12:40.407994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.640 [2024-11-27 14:12:40.408039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.640 BaseBdev1 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.640 BaseBdev2_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 true 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 [2024-11-27 14:12:40.468046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:09.640 [2024-11-27 14:12:40.468112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.640 [2024-11-27 14:12:40.468164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:09.640 [2024-11-27 14:12:40.468176] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.640 [2024-11-27 14:12:40.470461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.640 [2024-11-27 14:12:40.470543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.640 BaseBdev2 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.640 14:12:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 BaseBdev3_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 true 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 [2024-11-27 14:12:40.539186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:09.640 [2024-11-27 14:12:40.539299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.640 [2024-11-27 14:12:40.539326] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:09.640 [2024-11-27 14:12:40.539339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.640 [2024-11-27 14:12:40.541841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.640 [2024-11-27 14:12:40.541890] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:09.640 BaseBdev3 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 [2024-11-27 14:12:40.551265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.640 [2024-11-27 14:12:40.553361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.640 [2024-11-27 14:12:40.553449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.640 [2024-11-27 14:12:40.553684] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:09.640 [2024-11-27 14:12:40.553700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:09.640 [2024-11-27 14:12:40.553996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:09.640 [2024-11-27 14:12:40.554216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:09.640 [2024-11-27 14:12:40.554243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:09.640 [2024-11-27 14:12:40.554472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.640 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.899 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.899 "name": "raid_bdev1", 00:13:09.899 "uuid": "bca896ab-5abe-4f84-9bbb-dcc56e07e4eb", 00:13:09.899 "strip_size_kb": 64, 00:13:09.899 "state": "online", 00:13:09.899 "raid_level": "raid0", 00:13:09.899 "superblock": true, 00:13:09.899 "num_base_bdevs": 3, 00:13:09.899 "num_base_bdevs_discovered": 3, 00:13:09.899 "num_base_bdevs_operational": 3, 00:13:09.899 "base_bdevs_list": [ 00:13:09.899 { 00:13:09.899 "name": "BaseBdev1", 
00:13:09.899 "uuid": "5daf16d3-57bd-5c25-9e2d-31aea99b6bcc", 00:13:09.899 "is_configured": true, 00:13:09.899 "data_offset": 2048, 00:13:09.899 "data_size": 63488 00:13:09.899 }, 00:13:09.899 { 00:13:09.899 "name": "BaseBdev2", 00:13:09.899 "uuid": "780abd4f-eb76-5b9e-abcf-f835ab2214df", 00:13:09.899 "is_configured": true, 00:13:09.899 "data_offset": 2048, 00:13:09.899 "data_size": 63488 00:13:09.899 }, 00:13:09.899 { 00:13:09.899 "name": "BaseBdev3", 00:13:09.899 "uuid": "b99d56ea-8b3d-5bcc-908e-350e8209e66c", 00:13:09.899 "is_configured": true, 00:13:09.899 "data_offset": 2048, 00:13:09.899 "data_size": 63488 00:13:09.899 } 00:13:09.899 ] 00:13:09.899 }' 00:13:09.899 14:12:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.899 14:12:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.157 14:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:10.157 14:12:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.416 [2024-11-27 14:12:41.135723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.356 "name": "raid_bdev1", 00:13:11.356 "uuid": "bca896ab-5abe-4f84-9bbb-dcc56e07e4eb", 00:13:11.356 "strip_size_kb": 64, 00:13:11.356 "state": "online", 00:13:11.356 
"raid_level": "raid0", 00:13:11.356 "superblock": true, 00:13:11.356 "num_base_bdevs": 3, 00:13:11.356 "num_base_bdevs_discovered": 3, 00:13:11.356 "num_base_bdevs_operational": 3, 00:13:11.356 "base_bdevs_list": [ 00:13:11.356 { 00:13:11.356 "name": "BaseBdev1", 00:13:11.356 "uuid": "5daf16d3-57bd-5c25-9e2d-31aea99b6bcc", 00:13:11.356 "is_configured": true, 00:13:11.356 "data_offset": 2048, 00:13:11.356 "data_size": 63488 00:13:11.356 }, 00:13:11.356 { 00:13:11.356 "name": "BaseBdev2", 00:13:11.356 "uuid": "780abd4f-eb76-5b9e-abcf-f835ab2214df", 00:13:11.356 "is_configured": true, 00:13:11.356 "data_offset": 2048, 00:13:11.356 "data_size": 63488 00:13:11.356 }, 00:13:11.356 { 00:13:11.356 "name": "BaseBdev3", 00:13:11.356 "uuid": "b99d56ea-8b3d-5bcc-908e-350e8209e66c", 00:13:11.356 "is_configured": true, 00:13:11.356 "data_offset": 2048, 00:13:11.356 "data_size": 63488 00:13:11.356 } 00:13:11.356 ] 00:13:11.356 }' 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.356 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.616 [2024-11-27 14:12:42.553402] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.616 [2024-11-27 14:12:42.553517] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.616 [2024-11-27 14:12:42.556759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.616 [2024-11-27 14:12:42.556860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.616 [2024-11-27 14:12:42.556927] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.616 [2024-11-27 14:12:42.556982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:11.616 { 00:13:11.616 "results": [ 00:13:11.616 { 00:13:11.616 "job": "raid_bdev1", 00:13:11.616 "core_mask": "0x1", 00:13:11.616 "workload": "randrw", 00:13:11.616 "percentage": 50, 00:13:11.616 "status": "finished", 00:13:11.616 "queue_depth": 1, 00:13:11.616 "io_size": 131072, 00:13:11.616 "runtime": 1.41853, 00:13:11.616 "iops": 13683.179065652472, 00:13:11.616 "mibps": 1710.397383206559, 00:13:11.616 "io_failed": 1, 00:13:11.616 "io_timeout": 0, 00:13:11.616 "avg_latency_us": 101.11257907831039, 00:13:11.616 "min_latency_us": 21.910917030567685, 00:13:11.616 "max_latency_us": 1659.8637554585152 00:13:11.616 } 00:13:11.616 ], 00:13:11.616 "core_count": 1 00:13:11.616 } 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65661 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65661 ']' 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65661 00:13:11.616 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65661 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65661' 00:13:11.875 killing process with pid 65661 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65661 00:13:11.875 [2024-11-27 14:12:42.598576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.875 14:12:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65661 00:13:12.133 [2024-11-27 14:12:42.875172] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dFHK7pRLoc 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:13.514 ************************************ 00:13:13.514 END TEST raid_write_error_test 00:13:13.514 ************************************ 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:13.514 00:13:13.514 real 0m4.888s 00:13:13.514 user 0m5.853s 00:13:13.514 sys 0m0.577s 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.514 14:12:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.514 14:12:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:13.514 14:12:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:13:13.514 14:12:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:13.514 14:12:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.514 14:12:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.514 ************************************ 00:13:13.514 START TEST raid_state_function_test 00:13:13.514 ************************************ 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:13.514 14:12:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:13.514 Process raid pid: 65803 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65803 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65803' 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65803 00:13:13.514 14:12:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65803 ']' 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.514 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.514 [2024-11-27 14:12:44.431702] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:13.514 [2024-11-27 14:12:44.432019] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.773 [2024-11-27 14:12:44.607517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.032 [2024-11-27 14:12:44.767067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.291 [2024-11-27 14:12:44.987337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.291 [2024-11-27 14:12:44.987472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.550 [2024-11-27 14:12:45.298322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.550 [2024-11-27 14:12:45.298390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.550 [2024-11-27 14:12:45.298403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.550 [2024-11-27 14:12:45.298414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.550 [2024-11-27 14:12:45.298421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.550 [2024-11-27 14:12:45.298431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.550 "name": "Existed_Raid", 00:13:14.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.550 "strip_size_kb": 64, 00:13:14.550 "state": "configuring", 00:13:14.550 "raid_level": "concat", 00:13:14.550 "superblock": false, 00:13:14.550 "num_base_bdevs": 3, 00:13:14.550 "num_base_bdevs_discovered": 0, 00:13:14.550 "num_base_bdevs_operational": 3, 00:13:14.550 "base_bdevs_list": [ 00:13:14.550 { 00:13:14.550 "name": "BaseBdev1", 00:13:14.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.550 "is_configured": false, 00:13:14.550 "data_offset": 0, 00:13:14.550 "data_size": 0 00:13:14.550 }, 00:13:14.550 { 00:13:14.550 "name": "BaseBdev2", 00:13:14.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.550 "is_configured": false, 00:13:14.550 "data_offset": 0, 00:13:14.550 "data_size": 0 00:13:14.550 }, 00:13:14.550 { 00:13:14.550 "name": "BaseBdev3", 00:13:14.550 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:14.550 "is_configured": false, 00:13:14.550 "data_offset": 0, 00:13:14.550 "data_size": 0 00:13:14.550 } 00:13:14.550 ] 00:13:14.550 }' 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.550 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.809 [2024-11-27 14:12:45.701720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.809 [2024-11-27 14:12:45.701818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.809 [2024-11-27 14:12:45.713573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.809 [2024-11-27 14:12:45.713706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.809 [2024-11-27 14:12:45.713766] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.809 [2024-11-27 14:12:45.713801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:14.809 [2024-11-27 14:12:45.713851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.809 [2024-11-27 14:12:45.713889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.809 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.095 [2024-11-27 14:12:45.765374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.095 BaseBdev1 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.095 [ 00:13:15.095 { 00:13:15.095 "name": "BaseBdev1", 00:13:15.095 "aliases": [ 00:13:15.095 "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31" 00:13:15.095 ], 00:13:15.095 "product_name": "Malloc disk", 00:13:15.095 "block_size": 512, 00:13:15.095 "num_blocks": 65536, 00:13:15.095 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:15.095 "assigned_rate_limits": { 00:13:15.095 "rw_ios_per_sec": 0, 00:13:15.095 "rw_mbytes_per_sec": 0, 00:13:15.095 "r_mbytes_per_sec": 0, 00:13:15.095 "w_mbytes_per_sec": 0 00:13:15.095 }, 00:13:15.095 "claimed": true, 00:13:15.095 "claim_type": "exclusive_write", 00:13:15.095 "zoned": false, 00:13:15.095 "supported_io_types": { 00:13:15.095 "read": true, 00:13:15.095 "write": true, 00:13:15.095 "unmap": true, 00:13:15.095 "flush": true, 00:13:15.095 "reset": true, 00:13:15.095 "nvme_admin": false, 00:13:15.095 "nvme_io": false, 00:13:15.095 "nvme_io_md": false, 00:13:15.095 "write_zeroes": true, 00:13:15.095 "zcopy": true, 00:13:15.095 "get_zone_info": false, 00:13:15.095 "zone_management": false, 00:13:15.095 "zone_append": false, 00:13:15.095 "compare": false, 00:13:15.095 "compare_and_write": false, 00:13:15.095 "abort": true, 00:13:15.095 "seek_hole": false, 00:13:15.095 "seek_data": false, 00:13:15.095 "copy": true, 00:13:15.095 "nvme_iov_md": false 00:13:15.095 }, 00:13:15.095 "memory_domains": [ 00:13:15.095 { 00:13:15.095 "dma_device_id": "system", 00:13:15.095 "dma_device_type": 1 00:13:15.095 }, 00:13:15.095 { 00:13:15.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:15.095 "dma_device_type": 2 00:13:15.095 } 00:13:15.095 ], 00:13:15.095 "driver_specific": {} 00:13:15.095 } 00:13:15.095 ] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.095 14:12:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.095 "name": "Existed_Raid", 00:13:15.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.095 "strip_size_kb": 64, 00:13:15.095 "state": "configuring", 00:13:15.095 "raid_level": "concat", 00:13:15.095 "superblock": false, 00:13:15.095 "num_base_bdevs": 3, 00:13:15.095 "num_base_bdevs_discovered": 1, 00:13:15.095 "num_base_bdevs_operational": 3, 00:13:15.095 "base_bdevs_list": [ 00:13:15.095 { 00:13:15.095 "name": "BaseBdev1", 00:13:15.095 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:15.095 "is_configured": true, 00:13:15.095 "data_offset": 0, 00:13:15.095 "data_size": 65536 00:13:15.095 }, 00:13:15.095 { 00:13:15.095 "name": "BaseBdev2", 00:13:15.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.095 "is_configured": false, 00:13:15.095 "data_offset": 0, 00:13:15.095 "data_size": 0 00:13:15.095 }, 00:13:15.095 { 00:13:15.095 "name": "BaseBdev3", 00:13:15.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.095 "is_configured": false, 00:13:15.095 "data_offset": 0, 00:13:15.095 "data_size": 0 00:13:15.095 } 00:13:15.095 ] 00:13:15.095 }' 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.095 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.355 [2024-11-27 14:12:46.256651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.355 [2024-11-27 14:12:46.256713] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.355 [2024-11-27 14:12:46.268675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.355 [2024-11-27 14:12:46.270725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.355 [2024-11-27 14:12:46.270773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.355 [2024-11-27 14:12:46.270785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.355 [2024-11-27 14:12:46.270795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.355 14:12:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.355 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.616 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.616 "name": "Existed_Raid", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "strip_size_kb": 64, 00:13:15.616 "state": "configuring", 00:13:15.616 "raid_level": "concat", 00:13:15.616 "superblock": false, 00:13:15.616 "num_base_bdevs": 3, 00:13:15.616 "num_base_bdevs_discovered": 1, 00:13:15.616 "num_base_bdevs_operational": 3, 00:13:15.616 "base_bdevs_list": [ 00:13:15.616 { 00:13:15.616 "name": "BaseBdev1", 00:13:15.616 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:15.616 "is_configured": true, 00:13:15.616 "data_offset": 
0, 00:13:15.616 "data_size": 65536 00:13:15.616 }, 00:13:15.616 { 00:13:15.616 "name": "BaseBdev2", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "is_configured": false, 00:13:15.616 "data_offset": 0, 00:13:15.616 "data_size": 0 00:13:15.616 }, 00:13:15.616 { 00:13:15.616 "name": "BaseBdev3", 00:13:15.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.616 "is_configured": false, 00:13:15.616 "data_offset": 0, 00:13:15.616 "data_size": 0 00:13:15.616 } 00:13:15.616 ] 00:13:15.616 }' 00:13:15.616 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.616 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.877 [2024-11-27 14:12:46.812482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.877 BaseBdev2 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.877 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.143 [ 00:13:16.143 { 00:13:16.143 "name": "BaseBdev2", 00:13:16.143 "aliases": [ 00:13:16.143 "843a3038-b713-4412-8e75-109d5e918b0d" 00:13:16.143 ], 00:13:16.143 "product_name": "Malloc disk", 00:13:16.143 "block_size": 512, 00:13:16.143 "num_blocks": 65536, 00:13:16.143 "uuid": "843a3038-b713-4412-8e75-109d5e918b0d", 00:13:16.143 "assigned_rate_limits": { 00:13:16.143 "rw_ios_per_sec": 0, 00:13:16.143 "rw_mbytes_per_sec": 0, 00:13:16.143 "r_mbytes_per_sec": 0, 00:13:16.143 "w_mbytes_per_sec": 0 00:13:16.143 }, 00:13:16.143 "claimed": true, 00:13:16.143 "claim_type": "exclusive_write", 00:13:16.143 "zoned": false, 00:13:16.143 "supported_io_types": { 00:13:16.143 "read": true, 00:13:16.143 "write": true, 00:13:16.143 "unmap": true, 00:13:16.143 "flush": true, 00:13:16.143 "reset": true, 00:13:16.143 "nvme_admin": false, 00:13:16.143 "nvme_io": false, 00:13:16.143 "nvme_io_md": false, 00:13:16.143 "write_zeroes": true, 00:13:16.143 "zcopy": true, 00:13:16.143 "get_zone_info": false, 00:13:16.143 "zone_management": false, 00:13:16.143 "zone_append": false, 00:13:16.143 "compare": false, 00:13:16.143 "compare_and_write": false, 00:13:16.143 "abort": true, 00:13:16.143 "seek_hole": 
false, 00:13:16.143 "seek_data": false, 00:13:16.143 "copy": true, 00:13:16.143 "nvme_iov_md": false 00:13:16.143 }, 00:13:16.143 "memory_domains": [ 00:13:16.143 { 00:13:16.143 "dma_device_id": "system", 00:13:16.143 "dma_device_type": 1 00:13:16.143 }, 00:13:16.143 { 00:13:16.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.143 "dma_device_type": 2 00:13:16.143 } 00:13:16.143 ], 00:13:16.143 "driver_specific": {} 00:13:16.143 } 00:13:16.143 ] 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.143 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.143 "name": "Existed_Raid", 00:13:16.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.143 "strip_size_kb": 64, 00:13:16.143 "state": "configuring", 00:13:16.143 "raid_level": "concat", 00:13:16.143 "superblock": false, 00:13:16.143 "num_base_bdevs": 3, 00:13:16.143 "num_base_bdevs_discovered": 2, 00:13:16.143 "num_base_bdevs_operational": 3, 00:13:16.143 "base_bdevs_list": [ 00:13:16.143 { 00:13:16.143 "name": "BaseBdev1", 00:13:16.143 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:16.143 "is_configured": true, 00:13:16.143 "data_offset": 0, 00:13:16.143 "data_size": 65536 00:13:16.143 }, 00:13:16.143 { 00:13:16.143 "name": "BaseBdev2", 00:13:16.143 "uuid": "843a3038-b713-4412-8e75-109d5e918b0d", 00:13:16.143 "is_configured": true, 00:13:16.143 "data_offset": 0, 00:13:16.143 "data_size": 65536 00:13:16.143 }, 00:13:16.143 { 00:13:16.143 "name": "BaseBdev3", 00:13:16.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.144 "is_configured": false, 00:13:16.144 "data_offset": 0, 00:13:16.144 "data_size": 0 00:13:16.144 } 00:13:16.144 ] 00:13:16.144 }' 00:13:16.144 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.144 14:12:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.408 [2024-11-27 14:12:47.350319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.408 [2024-11-27 14:12:47.350382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:16.408 [2024-11-27 14:12:47.350397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:16.408 [2024-11-27 14:12:47.350731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.408 [2024-11-27 14:12:47.350938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:16.408 [2024-11-27 14:12:47.350952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:16.408 [2024-11-27 14:12:47.351317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.408 BaseBdev3 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.408 14:12:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.408 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 [ 00:13:16.672 { 00:13:16.672 "name": "BaseBdev3", 00:13:16.672 "aliases": [ 00:13:16.672 "1c55d91b-8dcf-4586-8151-30f2a6456a7e" 00:13:16.672 ], 00:13:16.672 "product_name": "Malloc disk", 00:13:16.672 "block_size": 512, 00:13:16.672 "num_blocks": 65536, 00:13:16.672 "uuid": "1c55d91b-8dcf-4586-8151-30f2a6456a7e", 00:13:16.672 "assigned_rate_limits": { 00:13:16.672 "rw_ios_per_sec": 0, 00:13:16.672 "rw_mbytes_per_sec": 0, 00:13:16.672 "r_mbytes_per_sec": 0, 00:13:16.672 "w_mbytes_per_sec": 0 00:13:16.672 }, 00:13:16.672 "claimed": true, 00:13:16.672 "claim_type": "exclusive_write", 00:13:16.672 "zoned": false, 00:13:16.672 "supported_io_types": { 00:13:16.672 "read": true, 00:13:16.672 "write": true, 00:13:16.672 "unmap": true, 00:13:16.672 "flush": true, 00:13:16.672 "reset": true, 00:13:16.672 "nvme_admin": false, 00:13:16.672 "nvme_io": false, 00:13:16.672 "nvme_io_md": false, 00:13:16.672 "write_zeroes": true, 00:13:16.672 "zcopy": true, 00:13:16.672 "get_zone_info": false, 00:13:16.672 "zone_management": false, 00:13:16.672 "zone_append": false, 00:13:16.672 "compare": false, 
00:13:16.672 "compare_and_write": false, 00:13:16.672 "abort": true, 00:13:16.672 "seek_hole": false, 00:13:16.672 "seek_data": false, 00:13:16.672 "copy": true, 00:13:16.672 "nvme_iov_md": false 00:13:16.672 }, 00:13:16.672 "memory_domains": [ 00:13:16.672 { 00:13:16.672 "dma_device_id": "system", 00:13:16.672 "dma_device_type": 1 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.672 "dma_device_type": 2 00:13:16.672 } 00:13:16.672 ], 00:13:16.672 "driver_specific": {} 00:13:16.672 } 00:13:16.672 ] 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.672 "name": "Existed_Raid", 00:13:16.672 "uuid": "9583248e-2d55-4497-bf59-e7fb9df71a9e", 00:13:16.672 "strip_size_kb": 64, 00:13:16.672 "state": "online", 00:13:16.672 "raid_level": "concat", 00:13:16.672 "superblock": false, 00:13:16.672 "num_base_bdevs": 3, 00:13:16.672 "num_base_bdevs_discovered": 3, 00:13:16.672 "num_base_bdevs_operational": 3, 00:13:16.672 "base_bdevs_list": [ 00:13:16.672 { 00:13:16.672 "name": "BaseBdev1", 00:13:16.672 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:16.672 "is_configured": true, 00:13:16.672 "data_offset": 0, 00:13:16.672 "data_size": 65536 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "name": "BaseBdev2", 00:13:16.672 "uuid": "843a3038-b713-4412-8e75-109d5e918b0d", 00:13:16.672 "is_configured": true, 00:13:16.672 "data_offset": 0, 00:13:16.672 "data_size": 65536 00:13:16.672 }, 00:13:16.672 { 00:13:16.672 "name": "BaseBdev3", 00:13:16.672 "uuid": "1c55d91b-8dcf-4586-8151-30f2a6456a7e", 00:13:16.672 "is_configured": true, 00:13:16.672 "data_offset": 0, 00:13:16.672 "data_size": 65536 00:13:16.672 } 00:13:16.672 ] 00:13:16.672 }' 00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:16.672 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.936 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.936 [2024-11-27 14:12:47.873878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.203 14:12:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.203 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:17.203 "name": "Existed_Raid", 00:13:17.203 "aliases": [ 00:13:17.203 "9583248e-2d55-4497-bf59-e7fb9df71a9e" 00:13:17.203 ], 00:13:17.203 "product_name": "Raid Volume", 00:13:17.203 "block_size": 512, 00:13:17.203 "num_blocks": 196608, 00:13:17.203 "uuid": "9583248e-2d55-4497-bf59-e7fb9df71a9e", 00:13:17.203 "assigned_rate_limits": { 00:13:17.203 "rw_ios_per_sec": 0, 00:13:17.203 "rw_mbytes_per_sec": 0, 00:13:17.203 "r_mbytes_per_sec": 
0, 00:13:17.203 "w_mbytes_per_sec": 0 00:13:17.203 }, 00:13:17.203 "claimed": false, 00:13:17.203 "zoned": false, 00:13:17.203 "supported_io_types": { 00:13:17.203 "read": true, 00:13:17.203 "write": true, 00:13:17.203 "unmap": true, 00:13:17.203 "flush": true, 00:13:17.203 "reset": true, 00:13:17.203 "nvme_admin": false, 00:13:17.203 "nvme_io": false, 00:13:17.203 "nvme_io_md": false, 00:13:17.203 "write_zeroes": true, 00:13:17.203 "zcopy": false, 00:13:17.203 "get_zone_info": false, 00:13:17.203 "zone_management": false, 00:13:17.203 "zone_append": false, 00:13:17.203 "compare": false, 00:13:17.203 "compare_and_write": false, 00:13:17.203 "abort": false, 00:13:17.203 "seek_hole": false, 00:13:17.203 "seek_data": false, 00:13:17.203 "copy": false, 00:13:17.203 "nvme_iov_md": false 00:13:17.203 }, 00:13:17.203 "memory_domains": [ 00:13:17.203 { 00:13:17.203 "dma_device_id": "system", 00:13:17.203 "dma_device_type": 1 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.203 "dma_device_type": 2 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "dma_device_id": "system", 00:13:17.203 "dma_device_type": 1 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.203 "dma_device_type": 2 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "dma_device_id": "system", 00:13:17.203 "dma_device_type": 1 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.203 "dma_device_type": 2 00:13:17.203 } 00:13:17.203 ], 00:13:17.203 "driver_specific": { 00:13:17.203 "raid": { 00:13:17.203 "uuid": "9583248e-2d55-4497-bf59-e7fb9df71a9e", 00:13:17.203 "strip_size_kb": 64, 00:13:17.203 "state": "online", 00:13:17.203 "raid_level": "concat", 00:13:17.203 "superblock": false, 00:13:17.203 "num_base_bdevs": 3, 00:13:17.203 "num_base_bdevs_discovered": 3, 00:13:17.203 "num_base_bdevs_operational": 3, 00:13:17.203 "base_bdevs_list": [ 00:13:17.203 { 00:13:17.203 "name": "BaseBdev1", 
00:13:17.203 "uuid": "a25e3ddd-59c8-48e6-85c3-0fb828bb4e31", 00:13:17.203 "is_configured": true, 00:13:17.203 "data_offset": 0, 00:13:17.203 "data_size": 65536 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "name": "BaseBdev2", 00:13:17.203 "uuid": "843a3038-b713-4412-8e75-109d5e918b0d", 00:13:17.203 "is_configured": true, 00:13:17.203 "data_offset": 0, 00:13:17.203 "data_size": 65536 00:13:17.203 }, 00:13:17.203 { 00:13:17.203 "name": "BaseBdev3", 00:13:17.203 "uuid": "1c55d91b-8dcf-4586-8151-30f2a6456a7e", 00:13:17.203 "is_configured": true, 00:13:17.203 "data_offset": 0, 00:13:17.203 "data_size": 65536 00:13:17.203 } 00:13:17.203 ] 00:13:17.203 } 00:13:17.203 } 00:13:17.203 }' 00:13:17.203 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:17.203 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:17.203 BaseBdev2 00:13:17.203 BaseBdev3' 00:13:17.203 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.203 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.464 [2024-11-27 14:12:48.177152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.464 [2024-11-27 14:12:48.177238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.464 [2024-11-27 14:12:48.177326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.464 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.465 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.465 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.465 "name": "Existed_Raid", 00:13:17.465 "uuid": "9583248e-2d55-4497-bf59-e7fb9df71a9e", 00:13:17.465 "strip_size_kb": 64, 00:13:17.465 "state": "offline", 00:13:17.465 "raid_level": "concat", 00:13:17.465 "superblock": false, 00:13:17.465 "num_base_bdevs": 3, 00:13:17.465 "num_base_bdevs_discovered": 2, 00:13:17.465 "num_base_bdevs_operational": 2, 00:13:17.465 "base_bdevs_list": [ 00:13:17.465 { 00:13:17.465 "name": null, 00:13:17.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.465 "is_configured": false, 00:13:17.465 "data_offset": 0, 00:13:17.465 "data_size": 65536 00:13:17.465 }, 00:13:17.465 { 00:13:17.465 "name": "BaseBdev2", 00:13:17.465 "uuid": 
"843a3038-b713-4412-8e75-109d5e918b0d", 00:13:17.465 "is_configured": true, 00:13:17.465 "data_offset": 0, 00:13:17.465 "data_size": 65536 00:13:17.465 }, 00:13:17.465 { 00:13:17.465 "name": "BaseBdev3", 00:13:17.465 "uuid": "1c55d91b-8dcf-4586-8151-30f2a6456a7e", 00:13:17.465 "is_configured": true, 00:13:17.465 "data_offset": 0, 00:13:17.465 "data_size": 65536 00:13:17.465 } 00:13:17.465 ] 00:13:17.465 }' 00:13:17.465 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.465 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.035 [2024-11-27 14:12:48.786904] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.035 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.036 14:12:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.036 [2024-11-27 14:12:48.952172] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.036 [2024-11-27 14:12:48.952228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.297 14:12:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.297 BaseBdev2 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.297 
14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.297 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 [ 00:13:18.298 { 00:13:18.298 "name": "BaseBdev2", 00:13:18.298 "aliases": [ 00:13:18.298 "c07c3b84-9ee7-4040-b39b-15b02a12da8e" 00:13:18.298 ], 00:13:18.298 "product_name": "Malloc disk", 00:13:18.298 "block_size": 512, 00:13:18.298 "num_blocks": 65536, 00:13:18.298 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:18.298 "assigned_rate_limits": { 00:13:18.298 "rw_ios_per_sec": 0, 00:13:18.298 "rw_mbytes_per_sec": 0, 00:13:18.298 "r_mbytes_per_sec": 0, 00:13:18.298 "w_mbytes_per_sec": 0 00:13:18.298 }, 00:13:18.298 "claimed": false, 00:13:18.298 "zoned": false, 00:13:18.298 "supported_io_types": { 00:13:18.298 "read": true, 00:13:18.298 "write": true, 00:13:18.298 "unmap": true, 00:13:18.298 "flush": true, 00:13:18.298 "reset": true, 00:13:18.298 "nvme_admin": false, 00:13:18.298 "nvme_io": false, 00:13:18.298 "nvme_io_md": false, 00:13:18.298 "write_zeroes": true, 
00:13:18.298 "zcopy": true, 00:13:18.298 "get_zone_info": false, 00:13:18.298 "zone_management": false, 00:13:18.298 "zone_append": false, 00:13:18.298 "compare": false, 00:13:18.298 "compare_and_write": false, 00:13:18.298 "abort": true, 00:13:18.298 "seek_hole": false, 00:13:18.298 "seek_data": false, 00:13:18.298 "copy": true, 00:13:18.298 "nvme_iov_md": false 00:13:18.298 }, 00:13:18.298 "memory_domains": [ 00:13:18.298 { 00:13:18.298 "dma_device_id": "system", 00:13:18.298 "dma_device_type": 1 00:13:18.298 }, 00:13:18.298 { 00:13:18.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.298 "dma_device_type": 2 00:13:18.298 } 00:13:18.298 ], 00:13:18.298 "driver_specific": {} 00:13:18.298 } 00:13:18.298 ] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 BaseBdev3 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.298 14:12:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.298 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.298 [ 00:13:18.556 { 00:13:18.556 "name": "BaseBdev3", 00:13:18.556 "aliases": [ 00:13:18.556 "bee92e7a-d90e-4807-b3d2-6d6183c020a4" 00:13:18.556 ], 00:13:18.556 "product_name": "Malloc disk", 00:13:18.556 "block_size": 512, 00:13:18.556 "num_blocks": 65536, 00:13:18.556 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:18.556 "assigned_rate_limits": { 00:13:18.556 "rw_ios_per_sec": 0, 00:13:18.556 "rw_mbytes_per_sec": 0, 00:13:18.556 "r_mbytes_per_sec": 0, 00:13:18.556 "w_mbytes_per_sec": 0 00:13:18.556 }, 00:13:18.556 "claimed": false, 00:13:18.556 "zoned": false, 00:13:18.556 "supported_io_types": { 00:13:18.556 "read": true, 00:13:18.556 "write": true, 00:13:18.556 "unmap": true, 00:13:18.556 "flush": true, 00:13:18.556 "reset": true, 00:13:18.556 "nvme_admin": false, 00:13:18.556 "nvme_io": false, 00:13:18.556 "nvme_io_md": false, 00:13:18.556 "write_zeroes": true, 
00:13:18.556 "zcopy": true, 00:13:18.556 "get_zone_info": false, 00:13:18.556 "zone_management": false, 00:13:18.556 "zone_append": false, 00:13:18.556 "compare": false, 00:13:18.556 "compare_and_write": false, 00:13:18.556 "abort": true, 00:13:18.556 "seek_hole": false, 00:13:18.556 "seek_data": false, 00:13:18.556 "copy": true, 00:13:18.556 "nvme_iov_md": false 00:13:18.556 }, 00:13:18.556 "memory_domains": [ 00:13:18.556 { 00:13:18.556 "dma_device_id": "system", 00:13:18.556 "dma_device_type": 1 00:13:18.556 }, 00:13:18.556 { 00:13:18.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.556 "dma_device_type": 2 00:13:18.556 } 00:13:18.556 ], 00:13:18.556 "driver_specific": {} 00:13:18.556 } 00:13:18.556 ] 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 [2024-11-27 14:12:49.262142] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.556 [2024-11-27 14:12:49.262251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.556 [2024-11-27 14:12:49.262301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.556 [2024-11-27 14:12:49.264327] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.556 "name": "Existed_Raid", 00:13:18.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.556 "strip_size_kb": 64, 00:13:18.556 "state": "configuring", 00:13:18.556 "raid_level": "concat", 00:13:18.556 "superblock": false, 00:13:18.556 "num_base_bdevs": 3, 00:13:18.556 "num_base_bdevs_discovered": 2, 00:13:18.556 "num_base_bdevs_operational": 3, 00:13:18.556 "base_bdevs_list": [ 00:13:18.556 { 00:13:18.556 "name": "BaseBdev1", 00:13:18.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.556 "is_configured": false, 00:13:18.556 "data_offset": 0, 00:13:18.556 "data_size": 0 00:13:18.556 }, 00:13:18.556 { 00:13:18.556 "name": "BaseBdev2", 00:13:18.556 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:18.556 "is_configured": true, 00:13:18.556 "data_offset": 0, 00:13:18.556 "data_size": 65536 00:13:18.556 }, 00:13:18.556 { 00:13:18.556 "name": "BaseBdev3", 00:13:18.556 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:18.556 "is_configured": true, 00:13:18.556 "data_offset": 0, 00:13:18.556 "data_size": 65536 00:13:18.556 } 00:13:18.556 ] 00:13:18.556 }' 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.556 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.814 [2024-11-27 14:12:49.705453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.814 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.815 "name": "Existed_Raid", 00:13:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.815 "strip_size_kb": 64, 00:13:18.815 "state": "configuring", 00:13:18.815 "raid_level": "concat", 00:13:18.815 "superblock": false, 
00:13:18.815 "num_base_bdevs": 3, 00:13:18.815 "num_base_bdevs_discovered": 1, 00:13:18.815 "num_base_bdevs_operational": 3, 00:13:18.815 "base_bdevs_list": [ 00:13:18.815 { 00:13:18.815 "name": "BaseBdev1", 00:13:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.815 "is_configured": false, 00:13:18.815 "data_offset": 0, 00:13:18.815 "data_size": 0 00:13:18.815 }, 00:13:18.815 { 00:13:18.815 "name": null, 00:13:18.815 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:18.815 "is_configured": false, 00:13:18.815 "data_offset": 0, 00:13:18.815 "data_size": 65536 00:13:18.815 }, 00:13:18.815 { 00:13:18.815 "name": "BaseBdev3", 00:13:18.815 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:18.815 "is_configured": true, 00:13:18.815 "data_offset": 0, 00:13:18.815 "data_size": 65536 00:13:18.815 } 00:13:18.815 ] 00:13:18.815 }' 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.815 14:12:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.384 
14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.384 [2024-11-27 14:12:50.200173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.384 BaseBdev1 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.384 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.384 [ 00:13:19.384 { 00:13:19.384 "name": "BaseBdev1", 00:13:19.384 "aliases": [ 00:13:19.384 "b0fa0a81-4edc-4240-97d9-6ac17e7685d1" 00:13:19.384 ], 00:13:19.384 "product_name": 
"Malloc disk", 00:13:19.384 "block_size": 512, 00:13:19.384 "num_blocks": 65536, 00:13:19.384 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:19.385 "assigned_rate_limits": { 00:13:19.385 "rw_ios_per_sec": 0, 00:13:19.385 "rw_mbytes_per_sec": 0, 00:13:19.385 "r_mbytes_per_sec": 0, 00:13:19.385 "w_mbytes_per_sec": 0 00:13:19.385 }, 00:13:19.385 "claimed": true, 00:13:19.385 "claim_type": "exclusive_write", 00:13:19.385 "zoned": false, 00:13:19.385 "supported_io_types": { 00:13:19.385 "read": true, 00:13:19.385 "write": true, 00:13:19.385 "unmap": true, 00:13:19.385 "flush": true, 00:13:19.385 "reset": true, 00:13:19.385 "nvme_admin": false, 00:13:19.385 "nvme_io": false, 00:13:19.385 "nvme_io_md": false, 00:13:19.385 "write_zeroes": true, 00:13:19.385 "zcopy": true, 00:13:19.385 "get_zone_info": false, 00:13:19.385 "zone_management": false, 00:13:19.385 "zone_append": false, 00:13:19.385 "compare": false, 00:13:19.385 "compare_and_write": false, 00:13:19.385 "abort": true, 00:13:19.385 "seek_hole": false, 00:13:19.385 "seek_data": false, 00:13:19.385 "copy": true, 00:13:19.385 "nvme_iov_md": false 00:13:19.385 }, 00:13:19.385 "memory_domains": [ 00:13:19.385 { 00:13:19.385 "dma_device_id": "system", 00:13:19.385 "dma_device_type": 1 00:13:19.385 }, 00:13:19.385 { 00:13:19.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.385 "dma_device_type": 2 00:13:19.385 } 00:13:19.385 ], 00:13:19.385 "driver_specific": {} 00:13:19.385 } 00:13:19.385 ] 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.385 14:12:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.385 "name": "Existed_Raid", 00:13:19.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.385 "strip_size_kb": 64, 00:13:19.385 "state": "configuring", 00:13:19.385 "raid_level": "concat", 00:13:19.385 "superblock": false, 00:13:19.385 "num_base_bdevs": 3, 00:13:19.385 "num_base_bdevs_discovered": 2, 00:13:19.385 "num_base_bdevs_operational": 3, 00:13:19.385 "base_bdevs_list": [ 00:13:19.385 { 00:13:19.385 "name": "BaseBdev1", 
00:13:19.385 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:19.385 "is_configured": true, 00:13:19.385 "data_offset": 0, 00:13:19.385 "data_size": 65536 00:13:19.385 }, 00:13:19.385 { 00:13:19.385 "name": null, 00:13:19.385 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:19.385 "is_configured": false, 00:13:19.385 "data_offset": 0, 00:13:19.385 "data_size": 65536 00:13:19.385 }, 00:13:19.385 { 00:13:19.385 "name": "BaseBdev3", 00:13:19.385 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:19.385 "is_configured": true, 00:13:19.385 "data_offset": 0, 00:13:19.385 "data_size": 65536 00:13:19.385 } 00:13:19.385 ] 00:13:19.385 }' 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.385 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.952 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:19.952 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.952 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.953 [2024-11-27 14:12:50.719315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:19.953 
14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.953 "name": "Existed_Raid", 00:13:19.953 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:19.953 "strip_size_kb": 64, 00:13:19.953 "state": "configuring", 00:13:19.953 "raid_level": "concat", 00:13:19.953 "superblock": false, 00:13:19.953 "num_base_bdevs": 3, 00:13:19.953 "num_base_bdevs_discovered": 1, 00:13:19.953 "num_base_bdevs_operational": 3, 00:13:19.953 "base_bdevs_list": [ 00:13:19.953 { 00:13:19.953 "name": "BaseBdev1", 00:13:19.953 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:19.953 "is_configured": true, 00:13:19.953 "data_offset": 0, 00:13:19.953 "data_size": 65536 00:13:19.953 }, 00:13:19.953 { 00:13:19.953 "name": null, 00:13:19.953 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:19.953 "is_configured": false, 00:13:19.953 "data_offset": 0, 00:13:19.953 "data_size": 65536 00:13:19.953 }, 00:13:19.953 { 00:13:19.953 "name": null, 00:13:19.953 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:19.953 "is_configured": false, 00:13:19.953 "data_offset": 0, 00:13:19.953 "data_size": 65536 00:13:19.953 } 00:13:19.953 ] 00:13:19.953 }' 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.953 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.520 [2024-11-27 14:12:51.266419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.520 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.520 "name": "Existed_Raid", 00:13:20.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.520 "strip_size_kb": 64, 00:13:20.520 "state": "configuring", 00:13:20.520 "raid_level": "concat", 00:13:20.520 "superblock": false, 00:13:20.520 "num_base_bdevs": 3, 00:13:20.520 "num_base_bdevs_discovered": 2, 00:13:20.520 "num_base_bdevs_operational": 3, 00:13:20.520 "base_bdevs_list": [ 00:13:20.521 { 00:13:20.521 "name": "BaseBdev1", 00:13:20.521 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:20.521 "is_configured": true, 00:13:20.521 "data_offset": 0, 00:13:20.521 "data_size": 65536 00:13:20.521 }, 00:13:20.521 { 00:13:20.521 "name": null, 00:13:20.521 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:20.521 "is_configured": false, 00:13:20.521 "data_offset": 0, 00:13:20.521 "data_size": 65536 00:13:20.521 }, 00:13:20.521 { 00:13:20.521 "name": "BaseBdev3", 00:13:20.521 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:20.521 "is_configured": true, 00:13:20.521 "data_offset": 0, 00:13:20.521 "data_size": 65536 00:13:20.521 } 00:13:20.521 ] 00:13:20.521 }' 00:13:20.521 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.521 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.088 14:12:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.088 [2024-11-27 14:12:51.797519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.088 "name": "Existed_Raid", 00:13:21.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.088 "strip_size_kb": 64, 00:13:21.088 "state": "configuring", 00:13:21.088 "raid_level": "concat", 00:13:21.088 "superblock": false, 00:13:21.088 "num_base_bdevs": 3, 00:13:21.088 "num_base_bdevs_discovered": 1, 00:13:21.088 "num_base_bdevs_operational": 3, 00:13:21.088 "base_bdevs_list": [ 00:13:21.088 { 00:13:21.088 "name": null, 00:13:21.088 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:21.088 "is_configured": false, 00:13:21.088 "data_offset": 0, 00:13:21.088 "data_size": 65536 00:13:21.088 }, 00:13:21.088 { 00:13:21.088 "name": null, 00:13:21.088 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:21.088 "is_configured": false, 00:13:21.088 "data_offset": 0, 00:13:21.088 "data_size": 65536 00:13:21.088 }, 00:13:21.088 { 00:13:21.088 "name": "BaseBdev3", 00:13:21.088 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:21.088 "is_configured": true, 00:13:21.088 "data_offset": 0, 00:13:21.088 "data_size": 65536 00:13:21.088 } 00:13:21.088 ] 00:13:21.088 }' 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.088 14:12:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.657 [2024-11-27 14:12:52.421846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.657 "name": "Existed_Raid", 00:13:21.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.657 "strip_size_kb": 64, 00:13:21.657 "state": "configuring", 00:13:21.657 "raid_level": "concat", 00:13:21.657 "superblock": false, 00:13:21.657 "num_base_bdevs": 3, 00:13:21.657 "num_base_bdevs_discovered": 2, 00:13:21.657 "num_base_bdevs_operational": 3, 00:13:21.657 "base_bdevs_list": [ 00:13:21.657 { 00:13:21.657 "name": null, 00:13:21.657 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:21.657 "is_configured": false, 00:13:21.657 "data_offset": 0, 00:13:21.657 "data_size": 65536 00:13:21.657 }, 00:13:21.657 { 00:13:21.657 "name": "BaseBdev2", 00:13:21.657 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:21.657 "is_configured": true, 00:13:21.657 "data_offset": 0, 00:13:21.657 "data_size": 65536 00:13:21.657 }, 00:13:21.657 { 
00:13:21.657 "name": "BaseBdev3", 00:13:21.657 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:21.657 "is_configured": true, 00:13:21.657 "data_offset": 0, 00:13:21.657 "data_size": 65536 00:13:21.657 } 00:13:21.657 ] 00:13:21.657 }' 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.657 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b0fa0a81-4edc-4240-97d9-6ac17e7685d1 00:13:22.226 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.226 14:12:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 [2024-11-27 14:12:53.006692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:22.226 [2024-11-27 14:12:53.006805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:22.226 [2024-11-27 14:12:53.006833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:22.226 [2024-11-27 14:12:53.007117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:22.226 [2024-11-27 14:12:53.007340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:22.226 [2024-11-27 14:12:53.007384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:22.226 [2024-11-27 14:12:53.007695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.226 NewBaseBdev 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.226 [ 00:13:22.226 { 00:13:22.226 "name": "NewBaseBdev", 00:13:22.226 "aliases": [ 00:13:22.226 "b0fa0a81-4edc-4240-97d9-6ac17e7685d1" 00:13:22.226 ], 00:13:22.226 "product_name": "Malloc disk", 00:13:22.226 "block_size": 512, 00:13:22.226 "num_blocks": 65536, 00:13:22.226 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:22.226 "assigned_rate_limits": { 00:13:22.226 "rw_ios_per_sec": 0, 00:13:22.226 "rw_mbytes_per_sec": 0, 00:13:22.226 "r_mbytes_per_sec": 0, 00:13:22.226 "w_mbytes_per_sec": 0 00:13:22.226 }, 00:13:22.226 "claimed": true, 00:13:22.226 "claim_type": "exclusive_write", 00:13:22.226 "zoned": false, 00:13:22.226 "supported_io_types": { 00:13:22.226 "read": true, 00:13:22.226 "write": true, 00:13:22.226 "unmap": true, 00:13:22.226 "flush": true, 00:13:22.226 "reset": true, 00:13:22.226 "nvme_admin": false, 00:13:22.226 "nvme_io": false, 00:13:22.226 "nvme_io_md": false, 00:13:22.226 "write_zeroes": true, 00:13:22.226 "zcopy": true, 00:13:22.226 "get_zone_info": false, 00:13:22.226 "zone_management": false, 00:13:22.226 "zone_append": false, 00:13:22.226 "compare": false, 00:13:22.226 "compare_and_write": false, 00:13:22.226 "abort": true, 00:13:22.226 "seek_hole": false, 00:13:22.226 "seek_data": false, 00:13:22.226 "copy": true, 00:13:22.226 "nvme_iov_md": false 00:13:22.226 }, 00:13:22.226 "memory_domains": [ 00:13:22.226 { 00:13:22.226 
"dma_device_id": "system", 00:13:22.226 "dma_device_type": 1 00:13:22.226 }, 00:13:22.226 { 00:13:22.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.226 "dma_device_type": 2 00:13:22.226 } 00:13:22.226 ], 00:13:22.226 "driver_specific": {} 00:13:22.226 } 00:13:22.226 ] 00:13:22.226 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.227 "name": "Existed_Raid", 00:13:22.227 "uuid": "df77969e-b73f-41f1-b5bf-0a721a92a9bb", 00:13:22.227 "strip_size_kb": 64, 00:13:22.227 "state": "online", 00:13:22.227 "raid_level": "concat", 00:13:22.227 "superblock": false, 00:13:22.227 "num_base_bdevs": 3, 00:13:22.227 "num_base_bdevs_discovered": 3, 00:13:22.227 "num_base_bdevs_operational": 3, 00:13:22.227 "base_bdevs_list": [ 00:13:22.227 { 00:13:22.227 "name": "NewBaseBdev", 00:13:22.227 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:22.227 "is_configured": true, 00:13:22.227 "data_offset": 0, 00:13:22.227 "data_size": 65536 00:13:22.227 }, 00:13:22.227 { 00:13:22.227 "name": "BaseBdev2", 00:13:22.227 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:22.227 "is_configured": true, 00:13:22.227 "data_offset": 0, 00:13:22.227 "data_size": 65536 00:13:22.227 }, 00:13:22.227 { 00:13:22.227 "name": "BaseBdev3", 00:13:22.227 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:22.227 "is_configured": true, 00:13:22.227 "data_offset": 0, 00:13:22.227 "data_size": 65536 00:13:22.227 } 00:13:22.227 ] 00:13:22.227 }' 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.227 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:22.797 [2024-11-27 14:12:53.522229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.797 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:22.797 "name": "Existed_Raid", 00:13:22.797 "aliases": [ 00:13:22.797 "df77969e-b73f-41f1-b5bf-0a721a92a9bb" 00:13:22.797 ], 00:13:22.797 "product_name": "Raid Volume", 00:13:22.797 "block_size": 512, 00:13:22.797 "num_blocks": 196608, 00:13:22.797 "uuid": "df77969e-b73f-41f1-b5bf-0a721a92a9bb", 00:13:22.797 "assigned_rate_limits": { 00:13:22.797 "rw_ios_per_sec": 0, 00:13:22.797 "rw_mbytes_per_sec": 0, 00:13:22.797 "r_mbytes_per_sec": 0, 00:13:22.797 "w_mbytes_per_sec": 0 00:13:22.797 }, 00:13:22.797 "claimed": false, 00:13:22.797 "zoned": false, 00:13:22.797 "supported_io_types": { 00:13:22.797 "read": true, 00:13:22.797 "write": true, 00:13:22.797 "unmap": true, 00:13:22.797 "flush": true, 00:13:22.797 "reset": true, 00:13:22.797 "nvme_admin": false, 00:13:22.797 "nvme_io": false, 00:13:22.797 "nvme_io_md": false, 00:13:22.797 "write_zeroes": true, 00:13:22.797 "zcopy": false, 
00:13:22.797 "get_zone_info": false, 00:13:22.797 "zone_management": false, 00:13:22.797 "zone_append": false, 00:13:22.797 "compare": false, 00:13:22.797 "compare_and_write": false, 00:13:22.797 "abort": false, 00:13:22.797 "seek_hole": false, 00:13:22.797 "seek_data": false, 00:13:22.797 "copy": false, 00:13:22.797 "nvme_iov_md": false 00:13:22.797 }, 00:13:22.797 "memory_domains": [ 00:13:22.797 { 00:13:22.798 "dma_device_id": "system", 00:13:22.798 "dma_device_type": 1 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.798 "dma_device_type": 2 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "dma_device_id": "system", 00:13:22.798 "dma_device_type": 1 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.798 "dma_device_type": 2 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "dma_device_id": "system", 00:13:22.798 "dma_device_type": 1 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.798 "dma_device_type": 2 00:13:22.798 } 00:13:22.798 ], 00:13:22.798 "driver_specific": { 00:13:22.798 "raid": { 00:13:22.798 "uuid": "df77969e-b73f-41f1-b5bf-0a721a92a9bb", 00:13:22.798 "strip_size_kb": 64, 00:13:22.798 "state": "online", 00:13:22.798 "raid_level": "concat", 00:13:22.798 "superblock": false, 00:13:22.798 "num_base_bdevs": 3, 00:13:22.798 "num_base_bdevs_discovered": 3, 00:13:22.798 "num_base_bdevs_operational": 3, 00:13:22.798 "base_bdevs_list": [ 00:13:22.798 { 00:13:22.798 "name": "NewBaseBdev", 00:13:22.798 "uuid": "b0fa0a81-4edc-4240-97d9-6ac17e7685d1", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "name": "BaseBdev2", 00:13:22.798 "uuid": "c07c3b84-9ee7-4040-b39b-15b02a12da8e", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "name": "BaseBdev3", 
00:13:22.798 "uuid": "bee92e7a-d90e-4807-b3d2-6d6183c020a4", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 } 00:13:22.798 ] 00:13:22.798 } 00:13:22.798 } 00:13:22.798 }' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:22.798 BaseBdev2 00:13:22.798 BaseBdev3' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.798 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:23.057 [2024-11-27 14:12:53.793415] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:23.057 [2024-11-27 14:12:53.793446] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.057 [2024-11-27 14:12:53.793539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.057 [2024-11-27 14:12:53.793602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.057 [2024-11-27 14:12:53.793615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65803 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65803 ']' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65803 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65803 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65803' 00:13:23.057 killing process with pid 65803 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65803 00:13:23.057 
[2024-11-27 14:12:53.837065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.057 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65803 00:13:23.314 [2024-11-27 14:12:54.181740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:24.694 00:13:24.694 real 0m11.036s 00:13:24.694 user 0m17.512s 00:13:24.694 sys 0m1.879s 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.694 ************************************ 00:13:24.694 END TEST raid_state_function_test 00:13:24.694 ************************************ 00:13:24.694 14:12:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:24.694 14:12:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:24.694 14:12:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.694 14:12:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:24.694 ************************************ 00:13:24.694 START TEST raid_state_function_test_sb 00:13:24.694 ************************************ 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:24.694 14:12:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 
00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66431 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66431' 00:13:24.694 Process raid pid: 66431 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66431 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66431 ']' 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.694 14:12:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.695 [2024-11-27 14:12:55.528342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:24.695 [2024-11-27 14:12:55.528565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.955 [2024-11-27 14:12:55.705615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.955 [2024-11-27 14:12:55.827646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.216 [2024-11-27 14:12:56.045539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.216 [2024-11-27 14:12:56.045684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.476 [2024-11-27 14:12:56.401591] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.476 [2024-11-27 14:12:56.401715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.476 [2024-11-27 14:12:56.401748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.476 [2024-11-27 14:12:56.401772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.476 [2024-11-27 14:12:56.401791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:25.476 [2024-11-27 14:12:56.401812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.476 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.736 14:12:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.736 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.736 "name": "Existed_Raid", 00:13:25.736 "uuid": "b304d719-92a2-4be7-8ab3-218bbe0c9df4", 00:13:25.736 "strip_size_kb": 64, 00:13:25.736 "state": "configuring", 00:13:25.736 "raid_level": "concat", 00:13:25.736 "superblock": true, 00:13:25.736 "num_base_bdevs": 3, 00:13:25.736 "num_base_bdevs_discovered": 0, 00:13:25.736 "num_base_bdevs_operational": 3, 00:13:25.736 "base_bdevs_list": [ 00:13:25.736 { 00:13:25.736 "name": "BaseBdev1", 00:13:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.736 "is_configured": false, 00:13:25.736 "data_offset": 0, 00:13:25.736 "data_size": 0 00:13:25.736 }, 00:13:25.736 { 00:13:25.736 "name": "BaseBdev2", 00:13:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.736 "is_configured": false, 00:13:25.736 "data_offset": 0, 00:13:25.736 "data_size": 0 00:13:25.736 }, 00:13:25.736 { 00:13:25.736 "name": "BaseBdev3", 00:13:25.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.736 "is_configured": false, 00:13:25.736 "data_offset": 0, 00:13:25.736 "data_size": 0 00:13:25.736 } 00:13:25.736 ] 00:13:25.736 }' 00:13:25.736 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.736 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-27 14:12:56.848779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:25.996 [2024-11-27 14:12:56.848879] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-27 14:12:56.856782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.996 [2024-11-27 14:12:56.856871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.996 [2024-11-27 14:12:56.856911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:25.996 [2024-11-27 14:12:56.856948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:25.996 [2024-11-27 14:12:56.856977] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:25.996 [2024-11-27 14:12:56.857003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [2024-11-27 14:12:56.906777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.996 BaseBdev1 
00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.996 [ 00:13:25.996 { 00:13:25.996 "name": "BaseBdev1", 00:13:25.996 "aliases": [ 00:13:25.996 "d5856306-6622-4cd8-b071-9262ddd326f4" 00:13:25.996 ], 00:13:25.996 "product_name": "Malloc disk", 00:13:25.996 "block_size": 512, 00:13:25.996 "num_blocks": 65536, 00:13:25.996 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:25.996 "assigned_rate_limits": { 00:13:25.996 
"rw_ios_per_sec": 0, 00:13:25.996 "rw_mbytes_per_sec": 0, 00:13:25.996 "r_mbytes_per_sec": 0, 00:13:25.996 "w_mbytes_per_sec": 0 00:13:25.996 }, 00:13:25.996 "claimed": true, 00:13:25.996 "claim_type": "exclusive_write", 00:13:25.996 "zoned": false, 00:13:25.996 "supported_io_types": { 00:13:25.996 "read": true, 00:13:25.996 "write": true, 00:13:25.996 "unmap": true, 00:13:25.996 "flush": true, 00:13:25.996 "reset": true, 00:13:25.996 "nvme_admin": false, 00:13:25.996 "nvme_io": false, 00:13:25.996 "nvme_io_md": false, 00:13:25.996 "write_zeroes": true, 00:13:25.996 "zcopy": true, 00:13:25.996 "get_zone_info": false, 00:13:25.996 "zone_management": false, 00:13:25.996 "zone_append": false, 00:13:25.996 "compare": false, 00:13:25.996 "compare_and_write": false, 00:13:25.996 "abort": true, 00:13:25.996 "seek_hole": false, 00:13:25.996 "seek_data": false, 00:13:25.996 "copy": true, 00:13:25.996 "nvme_iov_md": false 00:13:25.996 }, 00:13:25.996 "memory_domains": [ 00:13:25.996 { 00:13:25.996 "dma_device_id": "system", 00:13:25.996 "dma_device_type": 1 00:13:25.996 }, 00:13:25.996 { 00:13:25.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.996 "dma_device_type": 2 00:13:25.996 } 00:13:25.996 ], 00:13:25.996 "driver_specific": {} 00:13:25.996 } 00:13:25.996 ] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.996 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.255 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.255 "name": "Existed_Raid", 00:13:26.255 "uuid": "f0403652-d618-4850-9f32-8b1b5b98ad78", 00:13:26.255 "strip_size_kb": 64, 00:13:26.255 "state": "configuring", 00:13:26.255 "raid_level": "concat", 00:13:26.255 "superblock": true, 00:13:26.255 "num_base_bdevs": 3, 00:13:26.255 "num_base_bdevs_discovered": 1, 00:13:26.255 "num_base_bdevs_operational": 3, 00:13:26.255 "base_bdevs_list": [ 00:13:26.255 { 00:13:26.255 "name": "BaseBdev1", 00:13:26.255 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:26.255 "is_configured": true, 00:13:26.255 "data_offset": 2048, 00:13:26.255 "data_size": 
63488 00:13:26.255 }, 00:13:26.255 { 00:13:26.255 "name": "BaseBdev2", 00:13:26.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.255 "is_configured": false, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 0 00:13:26.256 }, 00:13:26.256 { 00:13:26.256 "name": "BaseBdev3", 00:13:26.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.256 "is_configured": false, 00:13:26.256 "data_offset": 0, 00:13:26.256 "data_size": 0 00:13:26.256 } 00:13:26.256 ] 00:13:26.256 }' 00:13:26.256 14:12:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.256 14:12:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 [2024-11-27 14:12:57.398003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.515 [2024-11-27 14:12:57.398146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.515 [2024-11-27 14:12:57.410044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.515 [2024-11-27 
14:12:57.411978] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.515 [2024-11-27 14:12:57.412077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.515 [2024-11-27 14:12:57.412138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.515 [2024-11-27 14:12:57.412166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.515 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.516 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.516 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.516 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.516 "name": "Existed_Raid", 00:13:26.516 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:26.516 "strip_size_kb": 64, 00:13:26.516 "state": "configuring", 00:13:26.516 "raid_level": "concat", 00:13:26.516 "superblock": true, 00:13:26.516 "num_base_bdevs": 3, 00:13:26.516 "num_base_bdevs_discovered": 1, 00:13:26.516 "num_base_bdevs_operational": 3, 00:13:26.516 "base_bdevs_list": [ 00:13:26.516 { 00:13:26.516 "name": "BaseBdev1", 00:13:26.516 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:26.516 "is_configured": true, 00:13:26.516 "data_offset": 2048, 00:13:26.516 "data_size": 63488 00:13:26.516 }, 00:13:26.516 { 00:13:26.516 "name": "BaseBdev2", 00:13:26.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.516 "is_configured": false, 00:13:26.516 "data_offset": 0, 00:13:26.516 "data_size": 0 00:13:26.516 }, 00:13:26.516 { 00:13:26.516 "name": "BaseBdev3", 00:13:26.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.516 "is_configured": false, 00:13:26.516 "data_offset": 0, 00:13:26.516 "data_size": 0 00:13:26.516 } 00:13:26.516 ] 00:13:26.516 }' 00:13:26.516 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.516 14:12:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.085 [2024-11-27 14:12:57.897949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.085 BaseBdev2 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.085 [ 00:13:27.085 { 00:13:27.085 "name": "BaseBdev2", 00:13:27.085 "aliases": [ 00:13:27.085 "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2" 00:13:27.085 ], 00:13:27.085 "product_name": "Malloc disk", 00:13:27.085 "block_size": 512, 00:13:27.085 "num_blocks": 65536, 00:13:27.085 "uuid": "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2", 00:13:27.085 "assigned_rate_limits": { 00:13:27.085 "rw_ios_per_sec": 0, 00:13:27.085 "rw_mbytes_per_sec": 0, 00:13:27.085 "r_mbytes_per_sec": 0, 00:13:27.085 "w_mbytes_per_sec": 0 00:13:27.085 }, 00:13:27.085 "claimed": true, 00:13:27.085 "claim_type": "exclusive_write", 00:13:27.085 "zoned": false, 00:13:27.085 "supported_io_types": { 00:13:27.085 "read": true, 00:13:27.085 "write": true, 00:13:27.085 "unmap": true, 00:13:27.085 "flush": true, 00:13:27.085 "reset": true, 00:13:27.085 "nvme_admin": false, 00:13:27.085 "nvme_io": false, 00:13:27.085 "nvme_io_md": false, 00:13:27.085 "write_zeroes": true, 00:13:27.085 "zcopy": true, 00:13:27.085 "get_zone_info": false, 00:13:27.085 "zone_management": false, 00:13:27.085 "zone_append": false, 00:13:27.085 "compare": false, 00:13:27.085 "compare_and_write": false, 00:13:27.085 "abort": true, 00:13:27.085 "seek_hole": false, 00:13:27.085 "seek_data": false, 00:13:27.085 "copy": true, 00:13:27.085 "nvme_iov_md": false 00:13:27.085 }, 00:13:27.085 "memory_domains": [ 00:13:27.085 { 00:13:27.085 "dma_device_id": "system", 00:13:27.085 "dma_device_type": 1 00:13:27.085 }, 00:13:27.085 { 00:13:27.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.085 "dma_device_type": 2 00:13:27.085 } 00:13:27.085 ], 00:13:27.085 "driver_specific": {} 00:13:27.085 } 00:13:27.085 ] 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.085 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.086 "name": "Existed_Raid", 00:13:27.086 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:27.086 "strip_size_kb": 64, 00:13:27.086 "state": "configuring", 00:13:27.086 "raid_level": "concat", 00:13:27.086 "superblock": true, 00:13:27.086 "num_base_bdevs": 3, 00:13:27.086 "num_base_bdevs_discovered": 2, 00:13:27.086 "num_base_bdevs_operational": 3, 00:13:27.086 "base_bdevs_list": [ 00:13:27.086 { 00:13:27.086 "name": "BaseBdev1", 00:13:27.086 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:27.086 "is_configured": true, 00:13:27.086 "data_offset": 2048, 00:13:27.086 "data_size": 63488 00:13:27.086 }, 00:13:27.086 { 00:13:27.086 "name": "BaseBdev2", 00:13:27.086 "uuid": "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2", 00:13:27.086 "is_configured": true, 00:13:27.086 "data_offset": 2048, 00:13:27.086 "data_size": 63488 00:13:27.086 }, 00:13:27.086 { 00:13:27.086 "name": "BaseBdev3", 00:13:27.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.086 "is_configured": false, 00:13:27.086 "data_offset": 0, 00:13:27.086 "data_size": 0 00:13:27.086 } 00:13:27.086 ] 00:13:27.086 }' 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.086 14:12:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.662 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.663 [2024-11-27 14:12:58.430969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.663 [2024-11-27 14:12:58.431421] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:27.663 [2024-11-27 14:12:58.431492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:27.663 [2024-11-27 14:12:58.431815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:27.663 [2024-11-27 14:12:58.432045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:27.663 BaseBdev3 00:13:27.663 [2024-11-27 14:12:58.432110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:27.663 [2024-11-27 14:12:58.432348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.663 [ 00:13:27.663 { 00:13:27.663 "name": "BaseBdev3", 00:13:27.663 "aliases": [ 00:13:27.663 "368a2f91-0c95-4551-b9ab-6c9e686d891f" 00:13:27.663 ], 00:13:27.663 "product_name": "Malloc disk", 00:13:27.663 "block_size": 512, 00:13:27.663 "num_blocks": 65536, 00:13:27.663 "uuid": "368a2f91-0c95-4551-b9ab-6c9e686d891f", 00:13:27.663 "assigned_rate_limits": { 00:13:27.663 "rw_ios_per_sec": 0, 00:13:27.663 "rw_mbytes_per_sec": 0, 00:13:27.663 "r_mbytes_per_sec": 0, 00:13:27.663 "w_mbytes_per_sec": 0 00:13:27.663 }, 00:13:27.663 "claimed": true, 00:13:27.663 "claim_type": "exclusive_write", 00:13:27.663 "zoned": false, 00:13:27.663 "supported_io_types": { 00:13:27.663 "read": true, 00:13:27.663 "write": true, 00:13:27.663 "unmap": true, 00:13:27.663 "flush": true, 00:13:27.663 "reset": true, 00:13:27.663 "nvme_admin": false, 00:13:27.663 "nvme_io": false, 00:13:27.663 "nvme_io_md": false, 00:13:27.663 "write_zeroes": true, 00:13:27.663 "zcopy": true, 00:13:27.663 "get_zone_info": false, 00:13:27.663 "zone_management": false, 00:13:27.663 "zone_append": false, 00:13:27.663 "compare": false, 00:13:27.663 "compare_and_write": false, 00:13:27.663 "abort": true, 00:13:27.663 "seek_hole": false, 00:13:27.663 "seek_data": false, 00:13:27.663 "copy": true, 00:13:27.663 "nvme_iov_md": false 00:13:27.663 }, 00:13:27.663 "memory_domains": [ 00:13:27.663 { 00:13:27.663 "dma_device_id": "system", 00:13:27.663 "dma_device_type": 1 00:13:27.663 }, 00:13:27.663 { 00:13:27.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.663 "dma_device_type": 2 00:13:27.663 } 00:13:27.663 ], 00:13:27.663 "driver_specific": 
{} 00:13:27.663 } 00:13:27.663 ] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.663 "name": "Existed_Raid", 00:13:27.663 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:27.663 "strip_size_kb": 64, 00:13:27.663 "state": "online", 00:13:27.663 "raid_level": "concat", 00:13:27.663 "superblock": true, 00:13:27.663 "num_base_bdevs": 3, 00:13:27.663 "num_base_bdevs_discovered": 3, 00:13:27.663 "num_base_bdevs_operational": 3, 00:13:27.663 "base_bdevs_list": [ 00:13:27.663 { 00:13:27.663 "name": "BaseBdev1", 00:13:27.663 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:27.663 "is_configured": true, 00:13:27.663 "data_offset": 2048, 00:13:27.663 "data_size": 63488 00:13:27.663 }, 00:13:27.663 { 00:13:27.663 "name": "BaseBdev2", 00:13:27.663 "uuid": "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2", 00:13:27.663 "is_configured": true, 00:13:27.663 "data_offset": 2048, 00:13:27.663 "data_size": 63488 00:13:27.663 }, 00:13:27.663 { 00:13:27.663 "name": "BaseBdev3", 00:13:27.663 "uuid": "368a2f91-0c95-4551-b9ab-6c9e686d891f", 00:13:27.663 "is_configured": true, 00:13:27.663 "data_offset": 2048, 00:13:27.663 "data_size": 63488 00:13:27.663 } 00:13:27.663 ] 00:13:27.663 }' 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.663 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.250 [2024-11-27 14:12:58.938514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.250 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:28.250 "name": "Existed_Raid", 00:13:28.250 "aliases": [ 00:13:28.250 "5e0accd4-ea84-4adc-9f85-6575501fc6f0" 00:13:28.250 ], 00:13:28.250 "product_name": "Raid Volume", 00:13:28.250 "block_size": 512, 00:13:28.250 "num_blocks": 190464, 00:13:28.250 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:28.250 "assigned_rate_limits": { 00:13:28.250 "rw_ios_per_sec": 0, 00:13:28.250 "rw_mbytes_per_sec": 0, 00:13:28.250 "r_mbytes_per_sec": 0, 00:13:28.250 "w_mbytes_per_sec": 0 00:13:28.250 }, 00:13:28.250 "claimed": false, 00:13:28.250 "zoned": false, 00:13:28.250 "supported_io_types": { 00:13:28.250 "read": true, 00:13:28.250 "write": true, 00:13:28.250 "unmap": true, 00:13:28.250 "flush": true, 00:13:28.250 "reset": true, 00:13:28.250 "nvme_admin": false, 00:13:28.250 "nvme_io": false, 00:13:28.250 "nvme_io_md": false, 00:13:28.250 
"write_zeroes": true, 00:13:28.250 "zcopy": false, 00:13:28.250 "get_zone_info": false, 00:13:28.250 "zone_management": false, 00:13:28.250 "zone_append": false, 00:13:28.250 "compare": false, 00:13:28.250 "compare_and_write": false, 00:13:28.250 "abort": false, 00:13:28.250 "seek_hole": false, 00:13:28.250 "seek_data": false, 00:13:28.250 "copy": false, 00:13:28.250 "nvme_iov_md": false 00:13:28.250 }, 00:13:28.250 "memory_domains": [ 00:13:28.250 { 00:13:28.250 "dma_device_id": "system", 00:13:28.250 "dma_device_type": 1 00:13:28.250 }, 00:13:28.250 { 00:13:28.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.250 "dma_device_type": 2 00:13:28.250 }, 00:13:28.250 { 00:13:28.250 "dma_device_id": "system", 00:13:28.250 "dma_device_type": 1 00:13:28.250 }, 00:13:28.250 { 00:13:28.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.250 "dma_device_type": 2 00:13:28.250 }, 00:13:28.250 { 00:13:28.250 "dma_device_id": "system", 00:13:28.250 "dma_device_type": 1 00:13:28.250 }, 00:13:28.250 { 00:13:28.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.250 "dma_device_type": 2 00:13:28.250 } 00:13:28.250 ], 00:13:28.250 "driver_specific": { 00:13:28.250 "raid": { 00:13:28.250 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:28.251 "strip_size_kb": 64, 00:13:28.251 "state": "online", 00:13:28.251 "raid_level": "concat", 00:13:28.251 "superblock": true, 00:13:28.251 "num_base_bdevs": 3, 00:13:28.251 "num_base_bdevs_discovered": 3, 00:13:28.251 "num_base_bdevs_operational": 3, 00:13:28.251 "base_bdevs_list": [ 00:13:28.251 { 00:13:28.251 "name": "BaseBdev1", 00:13:28.251 "uuid": "d5856306-6622-4cd8-b071-9262ddd326f4", 00:13:28.251 "is_configured": true, 00:13:28.251 "data_offset": 2048, 00:13:28.251 "data_size": 63488 00:13:28.251 }, 00:13:28.251 { 00:13:28.251 "name": "BaseBdev2", 00:13:28.251 "uuid": "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2", 00:13:28.251 "is_configured": true, 00:13:28.251 "data_offset": 2048, 00:13:28.251 "data_size": 63488 00:13:28.251 }, 
00:13:28.251 { 00:13:28.251 "name": "BaseBdev3", 00:13:28.251 "uuid": "368a2f91-0c95-4551-b9ab-6c9e686d891f", 00:13:28.251 "is_configured": true, 00:13:28.251 "data_offset": 2048, 00:13:28.251 "data_size": 63488 00:13:28.251 } 00:13:28.251 ] 00:13:28.251 } 00:13:28.251 } 00:13:28.251 }' 00:13:28.251 14:12:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:28.251 BaseBdev2 00:13:28.251 BaseBdev3' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.251 
14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.251 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.510 [2024-11-27 14:12:59.225766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.510 [2024-11-27 14:12:59.225868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.510 [2024-11-27 14:12:59.225934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.510 "name": "Existed_Raid", 00:13:28.510 "uuid": "5e0accd4-ea84-4adc-9f85-6575501fc6f0", 00:13:28.510 "strip_size_kb": 64, 00:13:28.510 "state": "offline", 00:13:28.510 "raid_level": "concat", 00:13:28.510 "superblock": true, 00:13:28.510 "num_base_bdevs": 3, 00:13:28.510 "num_base_bdevs_discovered": 2, 00:13:28.510 "num_base_bdevs_operational": 2, 00:13:28.510 "base_bdevs_list": [ 00:13:28.510 { 00:13:28.510 "name": null, 00:13:28.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.510 "is_configured": false, 00:13:28.510 "data_offset": 0, 00:13:28.510 "data_size": 63488 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "name": "BaseBdev2", 00:13:28.510 "uuid": "1c0ae26f-8ff2-4c1d-948d-afc97327aaf2", 00:13:28.510 "is_configured": true, 00:13:28.510 "data_offset": 2048, 00:13:28.510 "data_size": 63488 00:13:28.510 }, 00:13:28.510 { 00:13:28.510 "name": "BaseBdev3", 00:13:28.510 "uuid": "368a2f91-0c95-4551-b9ab-6c9e686d891f", 
00:13:28.510 "is_configured": true, 00:13:28.510 "data_offset": 2048, 00:13:28.510 "data_size": 63488 00:13:28.510 } 00:13:28.510 ] 00:13:28.510 }' 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.510 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.078 [2024-11-27 14:12:59.826551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:29.078 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.079 14:12:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.079 [2024-11-27 14:12:59.990734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.079 [2024-11-27 14:12:59.990866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.338 BaseBdev2 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.338 14:13:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.338 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.339 [ 00:13:29.339 { 00:13:29.339 "name": "BaseBdev2", 00:13:29.339 "aliases": [ 00:13:29.339 "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8" 00:13:29.339 ], 00:13:29.339 "product_name": "Malloc disk", 00:13:29.339 "block_size": 512, 00:13:29.339 "num_blocks": 65536, 00:13:29.339 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:29.339 "assigned_rate_limits": { 00:13:29.339 "rw_ios_per_sec": 0, 00:13:29.339 "rw_mbytes_per_sec": 0, 00:13:29.339 "r_mbytes_per_sec": 0, 00:13:29.339 "w_mbytes_per_sec": 0 00:13:29.339 }, 00:13:29.339 "claimed": false, 00:13:29.339 "zoned": false, 00:13:29.339 "supported_io_types": { 00:13:29.339 "read": true, 00:13:29.339 "write": true, 00:13:29.339 "unmap": true, 00:13:29.339 "flush": true, 00:13:29.339 "reset": true, 00:13:29.339 "nvme_admin": false, 00:13:29.339 "nvme_io": false, 00:13:29.339 "nvme_io_md": false, 00:13:29.339 "write_zeroes": true, 00:13:29.339 "zcopy": true, 00:13:29.339 "get_zone_info": false, 00:13:29.339 
"zone_management": false, 00:13:29.339 "zone_append": false, 00:13:29.339 "compare": false, 00:13:29.339 "compare_and_write": false, 00:13:29.339 "abort": true, 00:13:29.339 "seek_hole": false, 00:13:29.339 "seek_data": false, 00:13:29.339 "copy": true, 00:13:29.339 "nvme_iov_md": false 00:13:29.339 }, 00:13:29.339 "memory_domains": [ 00:13:29.339 { 00:13:29.339 "dma_device_id": "system", 00:13:29.339 "dma_device_type": 1 00:13:29.339 }, 00:13:29.339 { 00:13:29.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.339 "dma_device_type": 2 00:13:29.339 } 00:13:29.339 ], 00:13:29.339 "driver_specific": {} 00:13:29.339 } 00:13:29.339 ] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.339 BaseBdev3 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.339 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.598 [ 00:13:29.598 { 00:13:29.598 "name": "BaseBdev3", 00:13:29.598 "aliases": [ 00:13:29.598 "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf" 00:13:29.598 ], 00:13:29.598 "product_name": "Malloc disk", 00:13:29.598 "block_size": 512, 00:13:29.598 "num_blocks": 65536, 00:13:29.598 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:29.598 "assigned_rate_limits": { 00:13:29.598 "rw_ios_per_sec": 0, 00:13:29.598 "rw_mbytes_per_sec": 0, 00:13:29.598 "r_mbytes_per_sec": 0, 00:13:29.598 "w_mbytes_per_sec": 0 00:13:29.598 }, 00:13:29.598 "claimed": false, 00:13:29.598 "zoned": false, 00:13:29.598 "supported_io_types": { 00:13:29.598 "read": true, 00:13:29.598 "write": true, 00:13:29.598 "unmap": true, 00:13:29.598 "flush": true, 00:13:29.598 "reset": true, 00:13:29.598 "nvme_admin": false, 00:13:29.598 "nvme_io": false, 00:13:29.598 "nvme_io_md": false, 00:13:29.598 "write_zeroes": true, 00:13:29.598 
"zcopy": true, 00:13:29.598 "get_zone_info": false, 00:13:29.598 "zone_management": false, 00:13:29.598 "zone_append": false, 00:13:29.598 "compare": false, 00:13:29.598 "compare_and_write": false, 00:13:29.598 "abort": true, 00:13:29.598 "seek_hole": false, 00:13:29.598 "seek_data": false, 00:13:29.598 "copy": true, 00:13:29.598 "nvme_iov_md": false 00:13:29.598 }, 00:13:29.598 "memory_domains": [ 00:13:29.598 { 00:13:29.598 "dma_device_id": "system", 00:13:29.598 "dma_device_type": 1 00:13:29.598 }, 00:13:29.598 { 00:13:29.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.598 "dma_device_type": 2 00:13:29.598 } 00:13:29.598 ], 00:13:29.598 "driver_specific": {} 00:13:29.598 } 00:13:29.598 ] 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.598 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.599 [2024-11-27 14:13:00.313393] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.599 [2024-11-27 14:13:00.313529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.599 [2024-11-27 14:13:00.313579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.599 [2024-11-27 14:13:00.315620] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.599 14:13:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.599 "name": "Existed_Raid", 00:13:29.599 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:29.599 "strip_size_kb": 64, 00:13:29.599 "state": "configuring", 00:13:29.599 "raid_level": "concat", 00:13:29.599 "superblock": true, 00:13:29.599 "num_base_bdevs": 3, 00:13:29.599 "num_base_bdevs_discovered": 2, 00:13:29.599 "num_base_bdevs_operational": 3, 00:13:29.599 "base_bdevs_list": [ 00:13:29.599 { 00:13:29.599 "name": "BaseBdev1", 00:13:29.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.599 "is_configured": false, 00:13:29.599 "data_offset": 0, 00:13:29.599 "data_size": 0 00:13:29.599 }, 00:13:29.599 { 00:13:29.599 "name": "BaseBdev2", 00:13:29.599 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:29.599 "is_configured": true, 00:13:29.599 "data_offset": 2048, 00:13:29.599 "data_size": 63488 00:13:29.599 }, 00:13:29.599 { 00:13:29.599 "name": "BaseBdev3", 00:13:29.599 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:29.599 "is_configured": true, 00:13:29.599 "data_offset": 2048, 00:13:29.599 "data_size": 63488 00:13:29.599 } 00:13:29.599 ] 00:13:29.599 }' 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.599 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 [2024-11-27 14:13:00.788549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.859 14:13:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.859 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.118 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.118 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.118 "name": "Existed_Raid", 00:13:30.118 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:30.118 "strip_size_kb": 64, 
00:13:30.118 "state": "configuring", 00:13:30.118 "raid_level": "concat", 00:13:30.118 "superblock": true, 00:13:30.118 "num_base_bdevs": 3, 00:13:30.118 "num_base_bdevs_discovered": 1, 00:13:30.118 "num_base_bdevs_operational": 3, 00:13:30.118 "base_bdevs_list": [ 00:13:30.118 { 00:13:30.118 "name": "BaseBdev1", 00:13:30.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.118 "is_configured": false, 00:13:30.118 "data_offset": 0, 00:13:30.118 "data_size": 0 00:13:30.118 }, 00:13:30.118 { 00:13:30.118 "name": null, 00:13:30.118 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:30.118 "is_configured": false, 00:13:30.118 "data_offset": 0, 00:13:30.118 "data_size": 63488 00:13:30.118 }, 00:13:30.118 { 00:13:30.118 "name": "BaseBdev3", 00:13:30.118 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:30.118 "is_configured": true, 00:13:30.118 "data_offset": 2048, 00:13:30.118 "data_size": 63488 00:13:30.118 } 00:13:30.118 ] 00:13:30.118 }' 00:13:30.118 14:13:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.118 14:13:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.378 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.639 [2024-11-27 14:13:01.337389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.639 BaseBdev1 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.639 
[ 00:13:30.639 { 00:13:30.639 "name": "BaseBdev1", 00:13:30.639 "aliases": [ 00:13:30.639 "3e9afddd-d51b-4d3f-96c9-6fc5131af524" 00:13:30.639 ], 00:13:30.639 "product_name": "Malloc disk", 00:13:30.639 "block_size": 512, 00:13:30.639 "num_blocks": 65536, 00:13:30.639 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:30.639 "assigned_rate_limits": { 00:13:30.639 "rw_ios_per_sec": 0, 00:13:30.639 "rw_mbytes_per_sec": 0, 00:13:30.639 "r_mbytes_per_sec": 0, 00:13:30.639 "w_mbytes_per_sec": 0 00:13:30.639 }, 00:13:30.639 "claimed": true, 00:13:30.639 "claim_type": "exclusive_write", 00:13:30.639 "zoned": false, 00:13:30.639 "supported_io_types": { 00:13:30.639 "read": true, 00:13:30.639 "write": true, 00:13:30.639 "unmap": true, 00:13:30.639 "flush": true, 00:13:30.639 "reset": true, 00:13:30.639 "nvme_admin": false, 00:13:30.639 "nvme_io": false, 00:13:30.639 "nvme_io_md": false, 00:13:30.639 "write_zeroes": true, 00:13:30.639 "zcopy": true, 00:13:30.639 "get_zone_info": false, 00:13:30.639 "zone_management": false, 00:13:30.639 "zone_append": false, 00:13:30.639 "compare": false, 00:13:30.639 "compare_and_write": false, 00:13:30.639 "abort": true, 00:13:30.639 "seek_hole": false, 00:13:30.639 "seek_data": false, 00:13:30.639 "copy": true, 00:13:30.639 "nvme_iov_md": false 00:13:30.639 }, 00:13:30.639 "memory_domains": [ 00:13:30.639 { 00:13:30.639 "dma_device_id": "system", 00:13:30.639 "dma_device_type": 1 00:13:30.639 }, 00:13:30.639 { 00:13:30.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.639 "dma_device_type": 2 00:13:30.639 } 00:13:30.639 ], 00:13:30.639 "driver_specific": {} 00:13:30.639 } 00:13:30.639 ] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.639 "name": "Existed_Raid", 00:13:30.639 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:30.639 "strip_size_kb": 64, 00:13:30.639 "state": "configuring", 00:13:30.639 "raid_level": "concat", 00:13:30.639 "superblock": true, 
00:13:30.639 "num_base_bdevs": 3, 00:13:30.639 "num_base_bdevs_discovered": 2, 00:13:30.639 "num_base_bdevs_operational": 3, 00:13:30.639 "base_bdevs_list": [ 00:13:30.639 { 00:13:30.639 "name": "BaseBdev1", 00:13:30.639 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:30.639 "is_configured": true, 00:13:30.639 "data_offset": 2048, 00:13:30.639 "data_size": 63488 00:13:30.639 }, 00:13:30.639 { 00:13:30.639 "name": null, 00:13:30.639 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:30.639 "is_configured": false, 00:13:30.639 "data_offset": 0, 00:13:30.639 "data_size": 63488 00:13:30.639 }, 00:13:30.639 { 00:13:30.639 "name": "BaseBdev3", 00:13:30.639 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:30.639 "is_configured": true, 00:13:30.639 "data_offset": 2048, 00:13:30.639 "data_size": 63488 00:13:30.639 } 00:13:30.639 ] 00:13:30.639 }' 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.639 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.899 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.899 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.899 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.899 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.899 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.222 [2024-11-27 14:13:01.880585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.222 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.223 "name": "Existed_Raid", 00:13:31.223 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:31.223 "strip_size_kb": 64, 00:13:31.223 "state": "configuring", 00:13:31.223 "raid_level": "concat", 00:13:31.223 "superblock": true, 00:13:31.223 "num_base_bdevs": 3, 00:13:31.223 "num_base_bdevs_discovered": 1, 00:13:31.223 "num_base_bdevs_operational": 3, 00:13:31.223 "base_bdevs_list": [ 00:13:31.223 { 00:13:31.223 "name": "BaseBdev1", 00:13:31.223 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:31.223 "is_configured": true, 00:13:31.223 "data_offset": 2048, 00:13:31.223 "data_size": 63488 00:13:31.223 }, 00:13:31.223 { 00:13:31.223 "name": null, 00:13:31.223 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:31.223 "is_configured": false, 00:13:31.223 "data_offset": 0, 00:13:31.223 "data_size": 63488 00:13:31.223 }, 00:13:31.223 { 00:13:31.223 "name": null, 00:13:31.223 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:31.223 "is_configured": false, 00:13:31.223 "data_offset": 0, 00:13:31.223 "data_size": 63488 00:13:31.223 } 00:13:31.223 ] 00:13:31.223 }' 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.223 14:13:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:31.482 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.483 [2024-11-27 14:13:02.392318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.483 14:13:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.483 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.743 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.743 "name": "Existed_Raid", 00:13:31.743 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:31.743 "strip_size_kb": 64, 00:13:31.743 "state": "configuring", 00:13:31.743 "raid_level": "concat", 00:13:31.743 "superblock": true, 00:13:31.743 "num_base_bdevs": 3, 00:13:31.743 "num_base_bdevs_discovered": 2, 00:13:31.743 "num_base_bdevs_operational": 3, 00:13:31.743 "base_bdevs_list": [ 00:13:31.743 { 00:13:31.743 "name": "BaseBdev1", 00:13:31.743 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:31.743 "is_configured": true, 00:13:31.743 "data_offset": 2048, 00:13:31.743 "data_size": 63488 00:13:31.743 }, 00:13:31.743 { 00:13:31.743 "name": null, 00:13:31.743 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:31.743 "is_configured": false, 00:13:31.743 "data_offset": 0, 00:13:31.743 "data_size": 63488 00:13:31.743 }, 00:13:31.743 { 00:13:31.743 "name": "BaseBdev3", 00:13:31.743 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:31.743 "is_configured": true, 00:13:31.743 "data_offset": 2048, 00:13:31.743 "data_size": 63488 00:13:31.743 } 00:13:31.743 ] 00:13:31.743 }' 00:13:31.743 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.743 
14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.002 14:13:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.002 [2024-11-27 14:13:02.931587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.262 14:13:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.262 "name": "Existed_Raid", 00:13:32.262 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:32.262 "strip_size_kb": 64, 00:13:32.262 "state": "configuring", 00:13:32.262 "raid_level": "concat", 00:13:32.262 "superblock": true, 00:13:32.262 "num_base_bdevs": 3, 00:13:32.262 "num_base_bdevs_discovered": 1, 00:13:32.262 "num_base_bdevs_operational": 3, 00:13:32.262 "base_bdevs_list": [ 00:13:32.262 { 00:13:32.262 "name": null, 00:13:32.262 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:32.262 "is_configured": false, 00:13:32.262 "data_offset": 0, 00:13:32.262 "data_size": 63488 00:13:32.262 }, 00:13:32.262 { 00:13:32.262 "name": null, 00:13:32.262 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:32.262 "is_configured": false, 
00:13:32.262 "data_offset": 0, 00:13:32.262 "data_size": 63488 00:13:32.262 }, 00:13:32.262 { 00:13:32.262 "name": "BaseBdev3", 00:13:32.262 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:32.262 "is_configured": true, 00:13:32.262 "data_offset": 2048, 00:13:32.262 "data_size": 63488 00:13:32.262 } 00:13:32.262 ] 00:13:32.262 }' 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.262 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.522 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.522 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.522 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.522 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.782 [2024-11-27 14:13:03.508694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:13:32.782 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.783 "name": "Existed_Raid", 00:13:32.783 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:32.783 "strip_size_kb": 64, 00:13:32.783 "state": "configuring", 00:13:32.783 "raid_level": "concat", 00:13:32.783 "superblock": true, 00:13:32.783 
"num_base_bdevs": 3, 00:13:32.783 "num_base_bdevs_discovered": 2, 00:13:32.783 "num_base_bdevs_operational": 3, 00:13:32.783 "base_bdevs_list": [ 00:13:32.783 { 00:13:32.783 "name": null, 00:13:32.783 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:32.783 "is_configured": false, 00:13:32.783 "data_offset": 0, 00:13:32.783 "data_size": 63488 00:13:32.783 }, 00:13:32.783 { 00:13:32.783 "name": "BaseBdev2", 00:13:32.783 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:32.783 "is_configured": true, 00:13:32.783 "data_offset": 2048, 00:13:32.783 "data_size": 63488 00:13:32.783 }, 00:13:32.783 { 00:13:32.783 "name": "BaseBdev3", 00:13:32.783 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:32.783 "is_configured": true, 00:13:32.783 "data_offset": 2048, 00:13:32.783 "data_size": 63488 00:13:32.783 } 00:13:32.783 ] 00:13:32.783 }' 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.783 14:13:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3e9afddd-d51b-4d3f-96c9-6fc5131af524 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 [2024-11-27 14:13:04.151373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:33.352 [2024-11-27 14:13:04.151608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:33.352 [2024-11-27 14:13:04.151625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:33.352 [2024-11-27 14:13:04.151864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:33.352 [2024-11-27 14:13:04.152017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:33.352 [2024-11-27 14:13:04.152027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:33.352 [2024-11-27 14:13:04.152224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.352 NewBaseBdev 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=NewBaseBdev 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 [ 00:13:33.352 { 00:13:33.352 "name": "NewBaseBdev", 00:13:33.352 "aliases": [ 00:13:33.352 "3e9afddd-d51b-4d3f-96c9-6fc5131af524" 00:13:33.352 ], 00:13:33.352 "product_name": "Malloc disk", 00:13:33.352 "block_size": 512, 00:13:33.352 "num_blocks": 65536, 00:13:33.352 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:33.352 "assigned_rate_limits": { 00:13:33.352 "rw_ios_per_sec": 0, 00:13:33.352 "rw_mbytes_per_sec": 0, 00:13:33.352 "r_mbytes_per_sec": 0, 00:13:33.352 "w_mbytes_per_sec": 0 00:13:33.352 }, 00:13:33.352 "claimed": true, 00:13:33.352 "claim_type": "exclusive_write", 00:13:33.352 "zoned": false, 00:13:33.352 "supported_io_types": { 00:13:33.352 "read": true, 00:13:33.352 
"write": true, 00:13:33.352 "unmap": true, 00:13:33.352 "flush": true, 00:13:33.352 "reset": true, 00:13:33.352 "nvme_admin": false, 00:13:33.352 "nvme_io": false, 00:13:33.352 "nvme_io_md": false, 00:13:33.352 "write_zeroes": true, 00:13:33.352 "zcopy": true, 00:13:33.352 "get_zone_info": false, 00:13:33.352 "zone_management": false, 00:13:33.352 "zone_append": false, 00:13:33.352 "compare": false, 00:13:33.352 "compare_and_write": false, 00:13:33.352 "abort": true, 00:13:33.352 "seek_hole": false, 00:13:33.352 "seek_data": false, 00:13:33.352 "copy": true, 00:13:33.352 "nvme_iov_md": false 00:13:33.352 }, 00:13:33.352 "memory_domains": [ 00:13:33.352 { 00:13:33.352 "dma_device_id": "system", 00:13:33.352 "dma_device_type": 1 00:13:33.352 }, 00:13:33.352 { 00:13:33.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.352 "dma_device_type": 2 00:13:33.352 } 00:13:33.352 ], 00:13:33.352 "driver_specific": {} 00:13:33.352 } 00:13:33.352 ] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.352 "name": "Existed_Raid", 00:13:33.352 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:33.352 "strip_size_kb": 64, 00:13:33.352 "state": "online", 00:13:33.352 "raid_level": "concat", 00:13:33.352 "superblock": true, 00:13:33.352 "num_base_bdevs": 3, 00:13:33.352 "num_base_bdevs_discovered": 3, 00:13:33.352 "num_base_bdevs_operational": 3, 00:13:33.352 "base_bdevs_list": [ 00:13:33.352 { 00:13:33.352 "name": "NewBaseBdev", 00:13:33.352 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:33.352 "is_configured": true, 00:13:33.352 "data_offset": 2048, 00:13:33.352 "data_size": 63488 00:13:33.352 }, 00:13:33.352 { 00:13:33.352 "name": "BaseBdev2", 00:13:33.352 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:33.352 "is_configured": true, 00:13:33.352 "data_offset": 2048, 00:13:33.352 "data_size": 63488 00:13:33.352 }, 00:13:33.352 { 00:13:33.352 "name": "BaseBdev3", 00:13:33.352 "uuid": 
"454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:33.352 "is_configured": true, 00:13:33.352 "data_offset": 2048, 00:13:33.352 "data_size": 63488 00:13:33.352 } 00:13:33.352 ] 00:13:33.352 }' 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.352 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.920 [2024-11-27 14:13:04.658969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.920 "name": "Existed_Raid", 00:13:33.920 "aliases": [ 00:13:33.920 "a19c2e64-e069-47eb-b478-bc1f19b82d7b" 
00:13:33.920 ], 00:13:33.920 "product_name": "Raid Volume", 00:13:33.920 "block_size": 512, 00:13:33.920 "num_blocks": 190464, 00:13:33.920 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:33.920 "assigned_rate_limits": { 00:13:33.920 "rw_ios_per_sec": 0, 00:13:33.920 "rw_mbytes_per_sec": 0, 00:13:33.920 "r_mbytes_per_sec": 0, 00:13:33.920 "w_mbytes_per_sec": 0 00:13:33.920 }, 00:13:33.920 "claimed": false, 00:13:33.920 "zoned": false, 00:13:33.920 "supported_io_types": { 00:13:33.920 "read": true, 00:13:33.920 "write": true, 00:13:33.920 "unmap": true, 00:13:33.920 "flush": true, 00:13:33.920 "reset": true, 00:13:33.920 "nvme_admin": false, 00:13:33.920 "nvme_io": false, 00:13:33.920 "nvme_io_md": false, 00:13:33.920 "write_zeroes": true, 00:13:33.920 "zcopy": false, 00:13:33.920 "get_zone_info": false, 00:13:33.920 "zone_management": false, 00:13:33.920 "zone_append": false, 00:13:33.920 "compare": false, 00:13:33.920 "compare_and_write": false, 00:13:33.920 "abort": false, 00:13:33.920 "seek_hole": false, 00:13:33.920 "seek_data": false, 00:13:33.920 "copy": false, 00:13:33.920 "nvme_iov_md": false 00:13:33.920 }, 00:13:33.920 "memory_domains": [ 00:13:33.920 { 00:13:33.920 "dma_device_id": "system", 00:13:33.920 "dma_device_type": 1 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.920 "dma_device_type": 2 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "dma_device_id": "system", 00:13:33.920 "dma_device_type": 1 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.920 "dma_device_type": 2 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "dma_device_id": "system", 00:13:33.920 "dma_device_type": 1 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.920 "dma_device_type": 2 00:13:33.920 } 00:13:33.920 ], 00:13:33.920 "driver_specific": { 00:13:33.920 "raid": { 00:13:33.920 "uuid": "a19c2e64-e069-47eb-b478-bc1f19b82d7b", 00:13:33.920 
"strip_size_kb": 64, 00:13:33.920 "state": "online", 00:13:33.920 "raid_level": "concat", 00:13:33.920 "superblock": true, 00:13:33.920 "num_base_bdevs": 3, 00:13:33.920 "num_base_bdevs_discovered": 3, 00:13:33.920 "num_base_bdevs_operational": 3, 00:13:33.920 "base_bdevs_list": [ 00:13:33.920 { 00:13:33.920 "name": "NewBaseBdev", 00:13:33.920 "uuid": "3e9afddd-d51b-4d3f-96c9-6fc5131af524", 00:13:33.920 "is_configured": true, 00:13:33.920 "data_offset": 2048, 00:13:33.920 "data_size": 63488 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "name": "BaseBdev2", 00:13:33.920 "uuid": "bb8c7f6a-d5df-4044-aa1a-144e77aaf7c8", 00:13:33.920 "is_configured": true, 00:13:33.920 "data_offset": 2048, 00:13:33.920 "data_size": 63488 00:13:33.920 }, 00:13:33.920 { 00:13:33.920 "name": "BaseBdev3", 00:13:33.920 "uuid": "454c2b0b-a3e2-41c0-b5ad-1e9acd1e8abf", 00:13:33.920 "is_configured": true, 00:13:33.920 "data_offset": 2048, 00:13:33.920 "data_size": 63488 00:13:33.920 } 00:13:33.920 ] 00:13:33.920 } 00:13:33.920 } 00:13:33.920 }' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:33.920 BaseBdev2 00:13:33.920 BaseBdev3' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.920 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.179 14:13:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.179 [2024-11-27 14:13:04.934173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.179 [2024-11-27 14:13:04.934261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.179 [2024-11-27 14:13:04.934382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.179 [2024-11-27 14:13:04.934444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.179 [2024-11-27 14:13:04.934457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66431 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66431 ']' 00:13:34.179 14:13:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66431 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66431 00:13:34.179 killing process with pid 66431 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66431' 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66431 00:13:34.179 [2024-11-27 14:13:04.983608] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:34.179 14:13:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66431 00:13:34.437 [2024-11-27 14:13:05.307641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:35.813 14:13:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:35.813 00:13:35.813 real 0m11.093s 00:13:35.813 user 0m17.556s 00:13:35.813 sys 0m1.968s 00:13:35.813 14:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.813 14:13:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.813 ************************************ 00:13:35.813 END TEST raid_state_function_test_sb 00:13:35.813 ************************************ 00:13:35.813 14:13:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:35.813 
14:13:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:35.813 14:13:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.813 14:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:35.813 ************************************ 00:13:35.813 START TEST raid_superblock_test 00:13:35.813 ************************************ 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:35.813 
14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67057 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67057 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67057 ']' 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.813 14:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.814 [2024-11-27 14:13:06.682298] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:35.814 [2024-11-27 14:13:06.682471] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67057 ] 00:13:36.072 [2024-11-27 14:13:06.857514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.072 [2024-11-27 14:13:06.979719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.331 [2024-11-27 14:13:07.196253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.331 [2024-11-27 14:13:07.196319] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:36.590 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:36.591 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:36.591 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:36.591 
14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.591 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.850 malloc1 00:13:36.850 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.850 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:36.850 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.850 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.850 [2024-11-27 14:13:07.582360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:36.850 [2024-11-27 14:13:07.582479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.851 [2024-11-27 14:13:07.582520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:36.851 [2024-11-27 14:13:07.582550] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.851 [2024-11-27 14:13:07.584775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.851 [2024-11-27 14:13:07.584848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:36.851 pt1 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 malloc2 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 [2024-11-27 14:13:07.642322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.851 [2024-11-27 14:13:07.642441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.851 [2024-11-27 14:13:07.642492] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:36.851 [2024-11-27 14:13:07.642546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.851 [2024-11-27 14:13:07.645152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.851 [2024-11-27 14:13:07.645233] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.851 
pt2 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 malloc3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 [2024-11-27 14:13:07.712999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:36.851 [2024-11-27 14:13:07.713107] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.851 [2024-11-27 14:13:07.713166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:36.851 [2024-11-27 14:13:07.713222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.851 [2024-11-27 14:13:07.715674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.851 [2024-11-27 14:13:07.715755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:36.851 pt3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 [2024-11-27 14:13:07.725044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:36.851 [2024-11-27 14:13:07.727110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.851 [2024-11-27 14:13:07.727201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:36.851 [2024-11-27 14:13:07.727377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:36.851 [2024-11-27 14:13:07.727392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:36.851 [2024-11-27 14:13:07.727670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:13:36.851 [2024-11-27 14:13:07.727870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:36.851 [2024-11-27 14:13:07.727885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:36.851 [2024-11-27 14:13:07.728060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.851 14:13:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.851 "name": "raid_bdev1", 00:13:36.851 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:36.851 "strip_size_kb": 64, 00:13:36.851 "state": "online", 00:13:36.851 "raid_level": "concat", 00:13:36.851 "superblock": true, 00:13:36.851 "num_base_bdevs": 3, 00:13:36.851 "num_base_bdevs_discovered": 3, 00:13:36.851 "num_base_bdevs_operational": 3, 00:13:36.851 "base_bdevs_list": [ 00:13:36.851 { 00:13:36.851 "name": "pt1", 00:13:36.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:36.851 "is_configured": true, 00:13:36.851 "data_offset": 2048, 00:13:36.851 "data_size": 63488 00:13:36.851 }, 00:13:36.851 { 00:13:36.851 "name": "pt2", 00:13:36.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.851 "is_configured": true, 00:13:36.851 "data_offset": 2048, 00:13:36.851 "data_size": 63488 00:13:36.851 }, 00:13:36.851 { 00:13:36.851 "name": "pt3", 00:13:36.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.851 "is_configured": true, 00:13:36.851 "data_offset": 2048, 00:13:36.851 "data_size": 63488 00:13:36.851 } 00:13:36.851 ] 00:13:36.851 }' 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.851 14:13:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.421 [2024-11-27 14:13:08.204656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:37.421 "name": "raid_bdev1", 00:13:37.421 "aliases": [ 00:13:37.421 "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a" 00:13:37.421 ], 00:13:37.421 "product_name": "Raid Volume", 00:13:37.421 "block_size": 512, 00:13:37.421 "num_blocks": 190464, 00:13:37.421 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:37.421 "assigned_rate_limits": { 00:13:37.421 "rw_ios_per_sec": 0, 00:13:37.421 "rw_mbytes_per_sec": 0, 00:13:37.421 "r_mbytes_per_sec": 0, 00:13:37.421 "w_mbytes_per_sec": 0 00:13:37.421 }, 00:13:37.421 "claimed": false, 00:13:37.421 "zoned": false, 00:13:37.421 "supported_io_types": { 00:13:37.421 "read": true, 00:13:37.421 "write": true, 00:13:37.421 "unmap": true, 00:13:37.421 "flush": true, 00:13:37.421 "reset": true, 00:13:37.421 "nvme_admin": false, 00:13:37.421 "nvme_io": false, 00:13:37.421 "nvme_io_md": false, 00:13:37.421 "write_zeroes": true, 00:13:37.421 "zcopy": false, 00:13:37.421 "get_zone_info": false, 00:13:37.421 "zone_management": false, 00:13:37.421 "zone_append": false, 00:13:37.421 "compare": 
false, 00:13:37.421 "compare_and_write": false, 00:13:37.421 "abort": false, 00:13:37.421 "seek_hole": false, 00:13:37.421 "seek_data": false, 00:13:37.421 "copy": false, 00:13:37.421 "nvme_iov_md": false 00:13:37.421 }, 00:13:37.421 "memory_domains": [ 00:13:37.421 { 00:13:37.421 "dma_device_id": "system", 00:13:37.421 "dma_device_type": 1 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.421 "dma_device_type": 2 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "dma_device_id": "system", 00:13:37.421 "dma_device_type": 1 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.421 "dma_device_type": 2 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "dma_device_id": "system", 00:13:37.421 "dma_device_type": 1 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.421 "dma_device_type": 2 00:13:37.421 } 00:13:37.421 ], 00:13:37.421 "driver_specific": { 00:13:37.421 "raid": { 00:13:37.421 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:37.421 "strip_size_kb": 64, 00:13:37.421 "state": "online", 00:13:37.421 "raid_level": "concat", 00:13:37.421 "superblock": true, 00:13:37.421 "num_base_bdevs": 3, 00:13:37.421 "num_base_bdevs_discovered": 3, 00:13:37.421 "num_base_bdevs_operational": 3, 00:13:37.421 "base_bdevs_list": [ 00:13:37.421 { 00:13:37.421 "name": "pt1", 00:13:37.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.421 "is_configured": true, 00:13:37.421 "data_offset": 2048, 00:13:37.421 "data_size": 63488 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "name": "pt2", 00:13:37.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.421 "is_configured": true, 00:13:37.421 "data_offset": 2048, 00:13:37.421 "data_size": 63488 00:13:37.421 }, 00:13:37.421 { 00:13:37.421 "name": "pt3", 00:13:37.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.421 "is_configured": true, 00:13:37.421 "data_offset": 2048, 00:13:37.421 
"data_size": 63488 00:13:37.421 } 00:13:37.421 ] 00:13:37.421 } 00:13:37.421 } 00:13:37.421 }' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:37.421 pt2 00:13:37.421 pt3' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.421 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.681 14:13:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.681 [2024-11-27 14:13:08.480282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.681 14:13:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a ']' 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.681 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.681 [2024-11-27 14:13:08.511863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.681 [2024-11-27 14:13:08.511892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.681 [2024-11-27 14:13:08.511977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.681 [2024-11-27 14:13:08.512038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.682 [2024-11-27 14:13:08.512047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:37.682 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 [2024-11-27 14:13:08.643715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:37.942 [2024-11-27 14:13:08.645750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:13:37.942 [2024-11-27 14:13:08.645903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:37.942 [2024-11-27 14:13:08.645979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:37.942 [2024-11-27 14:13:08.646036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:37.942 [2024-11-27 14:13:08.646055] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:37.942 [2024-11-27 14:13:08.646071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.942 [2024-11-27 14:13:08.646082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:37.942 request: 00:13:37.942 { 00:13:37.942 "name": "raid_bdev1", 00:13:37.942 "raid_level": "concat", 00:13:37.942 "base_bdevs": [ 00:13:37.942 "malloc1", 00:13:37.942 "malloc2", 00:13:37.942 "malloc3" 00:13:37.942 ], 00:13:37.942 "strip_size_kb": 64, 00:13:37.942 "superblock": false, 00:13:37.942 "method": "bdev_raid_create", 00:13:37.942 "req_id": 1 00:13:37.942 } 00:13:37.942 Got JSON-RPC error response 00:13:37.942 response: 00:13:37.942 { 00:13:37.942 "code": -17, 00:13:37.942 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:37.942 } 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 [2024-11-27 14:13:08.711523] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.942 [2024-11-27 14:13:08.711650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.942 [2024-11-27 14:13:08.711693] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:37.942 [2024-11-27 14:13:08.711728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.942 [2024-11-27 14:13:08.714078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.942 [2024-11-27 14:13:08.714178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.942 [2024-11-27 14:13:08.714315] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.942 [2024-11-27 14:13:08.714406] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.942 pt1 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.942 "name": "raid_bdev1", 
00:13:37.942 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:37.942 "strip_size_kb": 64, 00:13:37.942 "state": "configuring", 00:13:37.942 "raid_level": "concat", 00:13:37.942 "superblock": true, 00:13:37.942 "num_base_bdevs": 3, 00:13:37.942 "num_base_bdevs_discovered": 1, 00:13:37.942 "num_base_bdevs_operational": 3, 00:13:37.942 "base_bdevs_list": [ 00:13:37.942 { 00:13:37.942 "name": "pt1", 00:13:37.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:37.942 "is_configured": true, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 }, 00:13:37.942 { 00:13:37.942 "name": null, 00:13:37.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.942 "is_configured": false, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 }, 00:13:37.942 { 00:13:37.942 "name": null, 00:13:37.942 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.942 "is_configured": false, 00:13:37.942 "data_offset": 2048, 00:13:37.942 "data_size": 63488 00:13:37.942 } 00:13:37.942 ] 00:13:37.942 }' 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.942 14:13:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.202 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:38.202 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.202 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.202 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.202 [2024-11-27 14:13:09.122864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.202 [2024-11-27 14:13:09.122940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.202 [2024-11-27 14:13:09.122969] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:38.202 [2024-11-27 14:13:09.122979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.202 [2024-11-27 14:13:09.123481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.202 [2024-11-27 14:13:09.123508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.203 [2024-11-27 14:13:09.123602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.203 [2024-11-27 14:13:09.123636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.203 pt2 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 [2024-11-27 14:13:09.134827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.203 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.461 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.461 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.461 "name": "raid_bdev1", 00:13:38.461 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:38.461 "strip_size_kb": 64, 00:13:38.461 "state": "configuring", 00:13:38.461 "raid_level": "concat", 00:13:38.461 "superblock": true, 00:13:38.461 "num_base_bdevs": 3, 00:13:38.461 "num_base_bdevs_discovered": 1, 00:13:38.461 "num_base_bdevs_operational": 3, 00:13:38.461 "base_bdevs_list": [ 00:13:38.461 { 00:13:38.461 "name": "pt1", 00:13:38.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.461 "is_configured": true, 00:13:38.461 "data_offset": 2048, 00:13:38.461 "data_size": 63488 00:13:38.461 }, 00:13:38.461 { 00:13:38.461 "name": null, 00:13:38.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.461 "is_configured": false, 00:13:38.461 "data_offset": 0, 00:13:38.461 "data_size": 63488 00:13:38.461 }, 00:13:38.461 { 00:13:38.461 "name": null, 00:13:38.461 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.461 "is_configured": false, 00:13:38.461 "data_offset": 2048, 00:13:38.461 "data_size": 63488 00:13:38.461 } 00:13:38.461 ] 00:13:38.461 }' 00:13:38.461 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.461 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.721 [2024-11-27 14:13:09.578063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.721 [2024-11-27 14:13:09.578152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.721 [2024-11-27 14:13:09.578172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:38.721 [2024-11-27 14:13:09.578183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.721 [2024-11-27 14:13:09.578700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.721 [2024-11-27 14:13:09.578730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.721 [2024-11-27 14:13:09.578823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:38.721 [2024-11-27 14:13:09.578851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.721 pt2 00:13:38.721 14:13:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.721 [2024-11-27 14:13:09.590017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.721 [2024-11-27 14:13:09.590074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.721 [2024-11-27 14:13:09.590090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:38.721 [2024-11-27 14:13:09.590099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.721 [2024-11-27 14:13:09.590507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.721 [2024-11-27 14:13:09.590534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.721 [2024-11-27 14:13:09.590602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:38.721 [2024-11-27 14:13:09.590629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.721 [2024-11-27 14:13:09.590747] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:38.721 [2024-11-27 14:13:09.590758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:38.721 [2024-11-27 14:13:09.590999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:13:38.721 [2024-11-27 14:13:09.591160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:38.721 [2024-11-27 14:13:09.591174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:38.721 [2024-11-27 14:13:09.591310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.721 pt3 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.721 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.722 14:13:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.722 "name": "raid_bdev1", 00:13:38.722 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:38.722 "strip_size_kb": 64, 00:13:38.722 "state": "online", 00:13:38.722 "raid_level": "concat", 00:13:38.722 "superblock": true, 00:13:38.722 "num_base_bdevs": 3, 00:13:38.722 "num_base_bdevs_discovered": 3, 00:13:38.722 "num_base_bdevs_operational": 3, 00:13:38.722 "base_bdevs_list": [ 00:13:38.722 { 00:13:38.722 "name": "pt1", 00:13:38.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.722 "is_configured": true, 00:13:38.722 "data_offset": 2048, 00:13:38.722 "data_size": 63488 00:13:38.722 }, 00:13:38.722 { 00:13:38.722 "name": "pt2", 00:13:38.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.722 "is_configured": true, 00:13:38.722 "data_offset": 2048, 00:13:38.722 "data_size": 63488 00:13:38.722 }, 00:13:38.722 { 00:13:38.722 "name": "pt3", 00:13:38.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.722 "is_configured": true, 00:13:38.722 "data_offset": 2048, 00:13:38.722 "data_size": 63488 00:13:38.722 } 00:13:38.722 ] 00:13:38.722 }' 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.722 14:13:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.290 [2024-11-27 14:13:10.057584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.290 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:39.290 "name": "raid_bdev1", 00:13:39.290 "aliases": [ 00:13:39.290 "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a" 00:13:39.290 ], 00:13:39.290 "product_name": "Raid Volume", 00:13:39.290 "block_size": 512, 00:13:39.290 "num_blocks": 190464, 00:13:39.290 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:39.290 "assigned_rate_limits": { 00:13:39.290 "rw_ios_per_sec": 0, 00:13:39.290 "rw_mbytes_per_sec": 0, 00:13:39.290 "r_mbytes_per_sec": 0, 00:13:39.290 "w_mbytes_per_sec": 0 00:13:39.290 }, 00:13:39.290 "claimed": false, 00:13:39.290 "zoned": false, 00:13:39.290 "supported_io_types": { 00:13:39.290 "read": true, 00:13:39.290 "write": true, 00:13:39.290 "unmap": true, 00:13:39.290 "flush": true, 00:13:39.290 "reset": true, 00:13:39.290 "nvme_admin": false, 00:13:39.290 "nvme_io": false, 00:13:39.290 
"nvme_io_md": false, 00:13:39.290 "write_zeroes": true, 00:13:39.290 "zcopy": false, 00:13:39.290 "get_zone_info": false, 00:13:39.290 "zone_management": false, 00:13:39.290 "zone_append": false, 00:13:39.290 "compare": false, 00:13:39.290 "compare_and_write": false, 00:13:39.290 "abort": false, 00:13:39.290 "seek_hole": false, 00:13:39.290 "seek_data": false, 00:13:39.290 "copy": false, 00:13:39.290 "nvme_iov_md": false 00:13:39.290 }, 00:13:39.290 "memory_domains": [ 00:13:39.290 { 00:13:39.290 "dma_device_id": "system", 00:13:39.290 "dma_device_type": 1 00:13:39.290 }, 00:13:39.290 { 00:13:39.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.290 "dma_device_type": 2 00:13:39.290 }, 00:13:39.290 { 00:13:39.290 "dma_device_id": "system", 00:13:39.290 "dma_device_type": 1 00:13:39.290 }, 00:13:39.290 { 00:13:39.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.290 "dma_device_type": 2 00:13:39.290 }, 00:13:39.290 { 00:13:39.290 "dma_device_id": "system", 00:13:39.290 "dma_device_type": 1 00:13:39.290 }, 00:13:39.290 { 00:13:39.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.290 "dma_device_type": 2 00:13:39.290 } 00:13:39.290 ], 00:13:39.290 "driver_specific": { 00:13:39.290 "raid": { 00:13:39.290 "uuid": "c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a", 00:13:39.290 "strip_size_kb": 64, 00:13:39.290 "state": "online", 00:13:39.290 "raid_level": "concat", 00:13:39.290 "superblock": true, 00:13:39.290 "num_base_bdevs": 3, 00:13:39.290 "num_base_bdevs_discovered": 3, 00:13:39.290 "num_base_bdevs_operational": 3, 00:13:39.290 "base_bdevs_list": [ 00:13:39.290 { 00:13:39.290 "name": "pt1", 00:13:39.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.291 "is_configured": true, 00:13:39.291 "data_offset": 2048, 00:13:39.291 "data_size": 63488 00:13:39.291 }, 00:13:39.291 { 00:13:39.291 "name": "pt2", 00:13:39.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.291 "is_configured": true, 00:13:39.291 "data_offset": 2048, 00:13:39.291 "data_size": 
63488 00:13:39.291 }, 00:13:39.291 { 00:13:39.291 "name": "pt3", 00:13:39.291 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.291 "is_configured": true, 00:13:39.291 "data_offset": 2048, 00:13:39.291 "data_size": 63488 00:13:39.291 } 00:13:39.291 ] 00:13:39.291 } 00:13:39.291 } 00:13:39.291 }' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:39.291 pt2 00:13:39.291 pt3' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.291 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:39.551 [2024-11-27 14:13:10.341015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.551 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a '!=' c0ca26d3-9b00-45f4-a2c9-5d2e6da22b1a ']' 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67057 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67057 ']' 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67057 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67057 00:13:39.552 killing process with pid 67057 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67057' 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67057 00:13:39.552 [2024-11-27 14:13:10.423647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.552 [2024-11-27 14:13:10.423747] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.552 14:13:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67057 00:13:39.552 [2024-11-27 14:13:10.423824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.552 [2024-11-27 14:13:10.423837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:39.812 [2024-11-27 14:13:10.733581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.191 ************************************ 00:13:41.191 END TEST raid_superblock_test 00:13:41.191 ************************************ 00:13:41.191 14:13:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:41.191 00:13:41.191 real 0m5.297s 00:13:41.191 user 0m7.621s 00:13:41.191 sys 0m0.871s 00:13:41.191 14:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.191 14:13:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.191 14:13:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:41.191 14:13:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:41.191 14:13:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.191 14:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.191 ************************************ 00:13:41.191 START TEST raid_read_error_test 00:13:41.191 ************************************ 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:41.191 14:13:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:41.191 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ArAlw1YFXH 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67310 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67310 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67310 ']' 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.192 14:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.192 [2024-11-27 14:13:12.060248] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:41.192 [2024-11-27 14:13:12.060458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67310 ] 00:13:41.451 [2024-11-27 14:13:12.238799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.451 [2024-11-27 14:13:12.359785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.710 [2024-11-27 14:13:12.572596] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.710 [2024-11-27 14:13:12.572674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 BaseBdev1_malloc 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 true 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 [2024-11-27 14:13:12.986704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:42.281 [2024-11-27 14:13:12.986778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.281 [2024-11-27 14:13:12.986808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:42.281 [2024-11-27 14:13:12.986822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.281 [2024-11-27 14:13:12.989197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.281 [2024-11-27 14:13:12.989249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:42.281 BaseBdev1 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 BaseBdev2_malloc 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 true 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 [2024-11-27 14:13:13.054093] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:42.281 [2024-11-27 14:13:13.054262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.281 [2024-11-27 14:13:13.054300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:42.281 [2024-11-27 14:13:13.054319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.281 [2024-11-27 14:13:13.056851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.281 [2024-11-27 14:13:13.056903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:42.281 BaseBdev2 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 BaseBdev3_malloc 00:13:42.281 14:13:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 true 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 [2024-11-27 14:13:13.137667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:42.281 [2024-11-27 14:13:13.137731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.281 [2024-11-27 14:13:13.137760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:42.281 [2024-11-27 14:13:13.137777] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.281 [2024-11-27 14:13:13.140370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.281 [2024-11-27 14:13:13.140411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:42.281 BaseBdev3 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.281 [2024-11-27 14:13:13.149746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.281 [2024-11-27 14:13:13.151711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.281 [2024-11-27 14:13:13.151857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.281 [2024-11-27 14:13:13.152136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:42.281 [2024-11-27 14:13:13.152186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:42.281 [2024-11-27 14:13:13.152487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:42.281 [2024-11-27 14:13:13.152710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.281 [2024-11-27 14:13:13.152761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:42.281 [2024-11-27 14:13:13.152962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.281 14:13:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.281 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.282 "name": "raid_bdev1", 00:13:42.282 "uuid": "46d05d9d-cf7b-40e1-946b-cc73f1870e4c", 00:13:42.282 "strip_size_kb": 64, 00:13:42.282 "state": "online", 00:13:42.282 "raid_level": "concat", 00:13:42.282 "superblock": true, 00:13:42.282 "num_base_bdevs": 3, 00:13:42.282 "num_base_bdevs_discovered": 3, 00:13:42.282 "num_base_bdevs_operational": 3, 00:13:42.282 "base_bdevs_list": [ 00:13:42.282 { 00:13:42.282 "name": "BaseBdev1", 00:13:42.282 "uuid": "e23bb0f5-a5cf-53c9-8bcf-29e244ac6f34", 00:13:42.282 "is_configured": true, 00:13:42.282 "data_offset": 2048, 00:13:42.282 "data_size": 63488 00:13:42.282 }, 00:13:42.282 { 00:13:42.282 "name": "BaseBdev2", 00:13:42.282 "uuid": "5e6f4d39-7857-5a75-8860-ee8712542de8", 00:13:42.282 "is_configured": true, 00:13:42.282 "data_offset": 2048, 00:13:42.282 "data_size": 63488 
00:13:42.282 }, 00:13:42.282 { 00:13:42.282 "name": "BaseBdev3", 00:13:42.282 "uuid": "daaf9e5b-8666-5f3e-be87-119f753879e7", 00:13:42.282 "is_configured": true, 00:13:42.282 "data_offset": 2048, 00:13:42.282 "data_size": 63488 00:13:42.282 } 00:13:42.282 ] 00:13:42.282 }' 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.282 14:13:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.850 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:42.850 14:13:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:42.850 [2024-11-27 14:13:13.750184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.788 "name": "raid_bdev1", 00:13:43.788 "uuid": "46d05d9d-cf7b-40e1-946b-cc73f1870e4c", 00:13:43.788 "strip_size_kb": 64, 00:13:43.788 "state": "online", 00:13:43.788 "raid_level": "concat", 00:13:43.788 "superblock": true, 00:13:43.788 "num_base_bdevs": 3, 00:13:43.788 "num_base_bdevs_discovered": 3, 00:13:43.788 "num_base_bdevs_operational": 3, 00:13:43.788 "base_bdevs_list": [ 00:13:43.788 { 00:13:43.788 "name": "BaseBdev1", 00:13:43.788 "uuid": "e23bb0f5-a5cf-53c9-8bcf-29e244ac6f34", 00:13:43.788 "is_configured": true, 00:13:43.788 "data_offset": 2048, 00:13:43.788 "data_size": 63488 
00:13:43.788 }, 00:13:43.788 { 00:13:43.788 "name": "BaseBdev2", 00:13:43.788 "uuid": "5e6f4d39-7857-5a75-8860-ee8712542de8", 00:13:43.788 "is_configured": true, 00:13:43.788 "data_offset": 2048, 00:13:43.788 "data_size": 63488 00:13:43.788 }, 00:13:43.788 { 00:13:43.788 "name": "BaseBdev3", 00:13:43.788 "uuid": "daaf9e5b-8666-5f3e-be87-119f753879e7", 00:13:43.788 "is_configured": true, 00:13:43.788 "data_offset": 2048, 00:13:43.788 "data_size": 63488 00:13:43.788 } 00:13:43.788 ] 00:13:43.788 }' 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.788 14:13:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.357 [2024-11-27 14:13:15.102544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:44.357 [2024-11-27 14:13:15.102655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:44.357 [2024-11-27 14:13:15.105992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.357 [2024-11-27 14:13:15.106111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.357 [2024-11-27 14:13:15.106226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.357 [2024-11-27 14:13:15.106287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:44.357 { 00:13:44.357 "results": [ 00:13:44.357 { 00:13:44.357 "job": "raid_bdev1", 00:13:44.357 "core_mask": "0x1", 00:13:44.357 "workload": "randrw", 00:13:44.357 "percentage": 50, 
00:13:44.357 "status": "finished", 00:13:44.357 "queue_depth": 1, 00:13:44.357 "io_size": 131072, 00:13:44.357 "runtime": 1.353178, 00:13:44.357 "iops": 14524.327176469023, 00:13:44.357 "mibps": 1815.540897058628, 00:13:44.357 "io_failed": 1, 00:13:44.357 "io_timeout": 0, 00:13:44.357 "avg_latency_us": 95.29810506343598, 00:13:44.357 "min_latency_us": 27.165065502183406, 00:13:44.357 "max_latency_us": 1438.071615720524 00:13:44.357 } 00:13:44.357 ], 00:13:44.357 "core_count": 1 00:13:44.357 } 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67310 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67310 ']' 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67310 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67310 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67310' 00:13:44.357 killing process with pid 67310 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67310 00:13:44.357 [2024-11-27 14:13:15.158135] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.357 14:13:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67310 00:13:44.636 [2024-11-27 
14:13:15.414345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ArAlw1YFXH 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:13:46.044 00:13:46.044 real 0m4.730s 00:13:46.044 user 0m5.642s 00:13:46.044 sys 0m0.581s 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.044 ************************************ 00:13:46.044 END TEST raid_read_error_test 00:13:46.044 ************************************ 00:13:46.044 14:13:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.044 14:13:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:46.044 14:13:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:46.044 14:13:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.044 14:13:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.044 ************************************ 00:13:46.044 START TEST raid_write_error_test 00:13:46.044 ************************************ 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:13:46.044 14:13:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:46.044 14:13:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Peo8WzFUcb 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67461 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67461 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67461 ']' 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.044 14:13:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.044 [2024-11-27 14:13:16.859057] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:13:46.044 [2024-11-27 14:13:16.859293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67461 ] 00:13:46.305 [2024-11-27 14:13:17.037926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.305 [2024-11-27 14:13:17.159412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.565 [2024-11-27 14:13:17.371500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.565 [2024-11-27 14:13:17.371608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.824 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 BaseBdev1_malloc 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 true 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 [2024-11-27 14:13:17.807971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:47.084 [2024-11-27 14:13:17.808044] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.084 [2024-11-27 14:13:17.808066] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:47.084 [2024-11-27 14:13:17.808078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.084 [2024-11-27 14:13:17.810424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.084 [2024-11-27 14:13:17.810468] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.084 BaseBdev1 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.084 BaseBdev2_malloc 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 true 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.084 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.084 [2024-11-27 14:13:17.870936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:47.084 [2024-11-27 14:13:17.871002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.084 [2024-11-27 14:13:17.871022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:47.084 [2024-11-27 14:13:17.871033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.085 [2024-11-27 14:13:17.873320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.085 [2024-11-27 14:13:17.873362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.085 BaseBdev2 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.085 14:13:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 BaseBdev3_malloc 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 true 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 [2024-11-27 14:13:17.945785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:47.085 [2024-11-27 14:13:17.945840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.085 [2024-11-27 14:13:17.945859] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:47.085 [2024-11-27 14:13:17.945870] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.085 [2024-11-27 14:13:17.948182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.085 [2024-11-27 14:13:17.948269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:47.085 BaseBdev3 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 [2024-11-27 14:13:17.953874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.085 [2024-11-27 14:13:17.955754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.085 [2024-11-27 14:13:17.955830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.085 [2024-11-27 14:13:17.956052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:47.085 [2024-11-27 14:13:17.956065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:47.085 [2024-11-27 14:13:17.956431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:47.085 [2024-11-27 14:13:17.956655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:47.085 [2024-11-27 14:13:17.956708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:47.085 [2024-11-27 14:13:17.956919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.085 14:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 14:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.085 "name": "raid_bdev1", 00:13:47.085 "uuid": "a8a8b0eb-59b9-4cbd-9579-4615c8c6b59d", 00:13:47.085 "strip_size_kb": 64, 00:13:47.085 "state": "online", 00:13:47.085 "raid_level": "concat", 00:13:47.085 "superblock": true, 00:13:47.085 "num_base_bdevs": 3, 00:13:47.085 "num_base_bdevs_discovered": 3, 00:13:47.085 "num_base_bdevs_operational": 3, 00:13:47.085 "base_bdevs_list": [ 00:13:47.085 { 00:13:47.085 
"name": "BaseBdev1", 00:13:47.085 "uuid": "1ca4daca-d0a3-50b3-b72a-61de636af856", 00:13:47.085 "is_configured": true, 00:13:47.085 "data_offset": 2048, 00:13:47.085 "data_size": 63488 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "name": "BaseBdev2", 00:13:47.085 "uuid": "9c00ed9f-4684-5b5b-af1b-f496f4a11d19", 00:13:47.085 "is_configured": true, 00:13:47.085 "data_offset": 2048, 00:13:47.085 "data_size": 63488 00:13:47.085 }, 00:13:47.085 { 00:13:47.085 "name": "BaseBdev3", 00:13:47.085 "uuid": "9b90787c-84e1-5954-a419-ca3115904016", 00:13:47.085 "is_configured": true, 00:13:47.085 "data_offset": 2048, 00:13:47.085 "data_size": 63488 00:13:47.085 } 00:13:47.085 ] 00:13:47.085 }' 00:13:47.085 14:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.085 14:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.654 14:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:47.654 14:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:47.654 [2024-11-27 14:13:18.514363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.592 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.593 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.593 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.593 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.593 "name": "raid_bdev1", 00:13:48.593 "uuid": "a8a8b0eb-59b9-4cbd-9579-4615c8c6b59d", 00:13:48.593 "strip_size_kb": 64, 00:13:48.593 "state": "online", 
00:13:48.593 "raid_level": "concat", 00:13:48.593 "superblock": true, 00:13:48.593 "num_base_bdevs": 3, 00:13:48.593 "num_base_bdevs_discovered": 3, 00:13:48.593 "num_base_bdevs_operational": 3, 00:13:48.593 "base_bdevs_list": [ 00:13:48.593 { 00:13:48.593 "name": "BaseBdev1", 00:13:48.593 "uuid": "1ca4daca-d0a3-50b3-b72a-61de636af856", 00:13:48.593 "is_configured": true, 00:13:48.593 "data_offset": 2048, 00:13:48.593 "data_size": 63488 00:13:48.593 }, 00:13:48.593 { 00:13:48.593 "name": "BaseBdev2", 00:13:48.593 "uuid": "9c00ed9f-4684-5b5b-af1b-f496f4a11d19", 00:13:48.593 "is_configured": true, 00:13:48.593 "data_offset": 2048, 00:13:48.593 "data_size": 63488 00:13:48.593 }, 00:13:48.593 { 00:13:48.593 "name": "BaseBdev3", 00:13:48.593 "uuid": "9b90787c-84e1-5954-a419-ca3115904016", 00:13:48.593 "is_configured": true, 00:13:48.593 "data_offset": 2048, 00:13:48.593 "data_size": 63488 00:13:48.593 } 00:13:48.593 ] 00:13:48.593 }' 00:13:48.593 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.593 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.161 [2024-11-27 14:13:19.842405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.161 [2024-11-27 14:13:19.842507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.161 [2024-11-27 14:13:19.845577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.161 [2024-11-27 14:13:19.845681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.161 [2024-11-27 14:13:19.845742] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.161 [2024-11-27 14:13:19.845806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:49.161 { 00:13:49.161 "results": [ 00:13:49.161 { 00:13:49.161 "job": "raid_bdev1", 00:13:49.161 "core_mask": "0x1", 00:13:49.161 "workload": "randrw", 00:13:49.161 "percentage": 50, 00:13:49.161 "status": "finished", 00:13:49.161 "queue_depth": 1, 00:13:49.161 "io_size": 131072, 00:13:49.161 "runtime": 1.32879, 00:13:49.161 "iops": 14546.316573724967, 00:13:49.161 "mibps": 1818.289571715621, 00:13:49.161 "io_failed": 1, 00:13:49.161 "io_timeout": 0, 00:13:49.161 "avg_latency_us": 95.14333743733862, 00:13:49.161 "min_latency_us": 27.72401746724891, 00:13:49.161 "max_latency_us": 1688.482096069869 00:13:49.161 } 00:13:49.161 ], 00:13:49.161 "core_count": 1 00:13:49.161 } 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67461 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67461 ']' 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67461 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67461 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.161 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.162 14:13:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67461' 00:13:49.162 killing process with pid 67461 00:13:49.162 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67461 00:13:49.162 [2024-11-27 14:13:19.876776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.162 14:13:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67461 00:13:49.421 [2024-11-27 14:13:20.122974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Peo8WzFUcb 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:50.802 ************************************ 00:13:50.802 END TEST raid_write_error_test 00:13:50.802 ************************************ 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:13:50.802 00:13:50.802 real 0m4.663s 00:13:50.802 user 0m5.556s 00:13:50.802 sys 0m0.550s 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.802 14:13:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.802 14:13:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:50.802 14:13:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:50.802 14:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.802 14:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.802 14:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.802 ************************************ 00:13:50.802 START TEST raid_state_function_test 00:13:50.802 ************************************ 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:50.802 Process raid pid: 67605 00:13:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67605 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67605' 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67605 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67605 ']' 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.802 14:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.802 [2024-11-27 14:13:21.568028] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:13:50.802 [2024-11-27 14:13:21.568195] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.802 [2024-11-27 14:13:21.746533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.061 [2024-11-27 14:13:21.875257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.319 [2024-11-27 14:13:22.087624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.319 [2024-11-27 14:13:22.087675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.579 [2024-11-27 14:13:22.457276] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.579 [2024-11-27 14:13:22.457342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.579 [2024-11-27 14:13:22.457353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.579 [2024-11-27 14:13:22.457363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.579 [2024-11-27 14:13:22.457369] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:13:51.579 [2024-11-27 14:13:22.457379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.579 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.579 14:13:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.579 "name": "Existed_Raid", 00:13:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.579 "strip_size_kb": 0, 00:13:51.579 "state": "configuring", 00:13:51.579 "raid_level": "raid1", 00:13:51.579 "superblock": false, 00:13:51.579 "num_base_bdevs": 3, 00:13:51.579 "num_base_bdevs_discovered": 0, 00:13:51.580 "num_base_bdevs_operational": 3, 00:13:51.580 "base_bdevs_list": [ 00:13:51.580 { 00:13:51.580 "name": "BaseBdev1", 00:13:51.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.580 "is_configured": false, 00:13:51.580 "data_offset": 0, 00:13:51.580 "data_size": 0 00:13:51.580 }, 00:13:51.580 { 00:13:51.580 "name": "BaseBdev2", 00:13:51.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.580 "is_configured": false, 00:13:51.580 "data_offset": 0, 00:13:51.580 "data_size": 0 00:13:51.580 }, 00:13:51.580 { 00:13:51.580 "name": "BaseBdev3", 00:13:51.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.580 "is_configured": false, 00:13:51.580 "data_offset": 0, 00:13:51.580 "data_size": 0 00:13:51.580 } 00:13:51.580 ] 00:13:51.580 }' 00:13:51.580 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.580 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 [2024-11-27 14:13:22.956343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.164 [2024-11-27 14:13:22.956454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.164 [2024-11-27 14:13:22.968316] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.164 [2024-11-27 14:13:22.968405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.164 [2024-11-27 14:13:22.968463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.164 [2024-11-27 14:13:22.968493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.164 [2024-11-27 14:13:22.968540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.164 [2024-11-27 14:13:22.968579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.164 14:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.165 [2024-11-27 14:13:23.016946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.165 BaseBdev1 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.165 [ 00:13:52.165 { 00:13:52.165 "name": "BaseBdev1", 00:13:52.165 "aliases": [ 00:13:52.165 "b80a5e59-3fcd-48c3-833e-bf370542bc52" 00:13:52.165 ], 00:13:52.165 "product_name": "Malloc disk", 00:13:52.165 "block_size": 512, 00:13:52.165 "num_blocks": 65536, 00:13:52.165 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:52.165 "assigned_rate_limits": { 00:13:52.165 "rw_ios_per_sec": 0, 00:13:52.165 "rw_mbytes_per_sec": 0, 00:13:52.165 "r_mbytes_per_sec": 0, 00:13:52.165 "w_mbytes_per_sec": 0 00:13:52.165 }, 
00:13:52.165 "claimed": true, 00:13:52.165 "claim_type": "exclusive_write", 00:13:52.165 "zoned": false, 00:13:52.165 "supported_io_types": { 00:13:52.165 "read": true, 00:13:52.165 "write": true, 00:13:52.165 "unmap": true, 00:13:52.165 "flush": true, 00:13:52.165 "reset": true, 00:13:52.165 "nvme_admin": false, 00:13:52.165 "nvme_io": false, 00:13:52.165 "nvme_io_md": false, 00:13:52.165 "write_zeroes": true, 00:13:52.165 "zcopy": true, 00:13:52.165 "get_zone_info": false, 00:13:52.165 "zone_management": false, 00:13:52.165 "zone_append": false, 00:13:52.165 "compare": false, 00:13:52.165 "compare_and_write": false, 00:13:52.165 "abort": true, 00:13:52.165 "seek_hole": false, 00:13:52.165 "seek_data": false, 00:13:52.165 "copy": true, 00:13:52.165 "nvme_iov_md": false 00:13:52.165 }, 00:13:52.165 "memory_domains": [ 00:13:52.165 { 00:13:52.165 "dma_device_id": "system", 00:13:52.165 "dma_device_type": 1 00:13:52.165 }, 00:13:52.165 { 00:13:52.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.165 "dma_device_type": 2 00:13:52.165 } 00:13:52.165 ], 00:13:52.165 "driver_specific": {} 00:13:52.165 } 00:13:52.165 ] 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.165 14:13:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.165 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.425 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.425 "name": "Existed_Raid", 00:13:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.425 "strip_size_kb": 0, 00:13:52.425 "state": "configuring", 00:13:52.425 "raid_level": "raid1", 00:13:52.425 "superblock": false, 00:13:52.425 "num_base_bdevs": 3, 00:13:52.425 "num_base_bdevs_discovered": 1, 00:13:52.425 "num_base_bdevs_operational": 3, 00:13:52.425 "base_bdevs_list": [ 00:13:52.425 { 00:13:52.425 "name": "BaseBdev1", 00:13:52.425 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:52.425 "is_configured": true, 00:13:52.425 "data_offset": 0, 00:13:52.425 "data_size": 65536 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "name": "BaseBdev2", 00:13:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.425 "is_configured": false, 00:13:52.425 
"data_offset": 0, 00:13:52.425 "data_size": 0 00:13:52.425 }, 00:13:52.425 { 00:13:52.425 "name": "BaseBdev3", 00:13:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.425 "is_configured": false, 00:13:52.425 "data_offset": 0, 00:13:52.425 "data_size": 0 00:13:52.425 } 00:13:52.425 ] 00:13:52.425 }' 00:13:52.425 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.425 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.685 [2024-11-27 14:13:23.512189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.685 [2024-11-27 14:13:23.512250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.685 [2024-11-27 14:13:23.520241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.685 [2024-11-27 14:13:23.522220] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.685 [2024-11-27 14:13:23.522301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:13:52.685 [2024-11-27 14:13:23.522331] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.685 [2024-11-27 14:13:23.522353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.685 
14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.685 "name": "Existed_Raid", 00:13:52.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.685 "strip_size_kb": 0, 00:13:52.685 "state": "configuring", 00:13:52.685 "raid_level": "raid1", 00:13:52.685 "superblock": false, 00:13:52.685 "num_base_bdevs": 3, 00:13:52.685 "num_base_bdevs_discovered": 1, 00:13:52.685 "num_base_bdevs_operational": 3, 00:13:52.685 "base_bdevs_list": [ 00:13:52.685 { 00:13:52.685 "name": "BaseBdev1", 00:13:52.685 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:52.685 "is_configured": true, 00:13:52.685 "data_offset": 0, 00:13:52.685 "data_size": 65536 00:13:52.685 }, 00:13:52.685 { 00:13:52.685 "name": "BaseBdev2", 00:13:52.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.685 "is_configured": false, 00:13:52.685 "data_offset": 0, 00:13:52.685 "data_size": 0 00:13:52.685 }, 00:13:52.685 { 00:13:52.685 "name": "BaseBdev3", 00:13:52.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.685 "is_configured": false, 00:13:52.685 "data_offset": 0, 00:13:52.685 "data_size": 0 00:13:52.685 } 00:13:52.685 ] 00:13:52.685 }' 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.685 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.254 14:13:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.254 [2024-11-27 14:13:23.996951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.254 BaseBdev2 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.254 14:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.254 [ 00:13:53.254 { 00:13:53.254 "name": "BaseBdev2", 00:13:53.254 "aliases": [ 00:13:53.254 "004d27ed-890c-4012-ba97-ea4a060be079" 00:13:53.254 ], 00:13:53.254 "product_name": "Malloc disk", 
00:13:53.254 "block_size": 512, 00:13:53.254 "num_blocks": 65536, 00:13:53.254 "uuid": "004d27ed-890c-4012-ba97-ea4a060be079", 00:13:53.254 "assigned_rate_limits": { 00:13:53.254 "rw_ios_per_sec": 0, 00:13:53.254 "rw_mbytes_per_sec": 0, 00:13:53.254 "r_mbytes_per_sec": 0, 00:13:53.254 "w_mbytes_per_sec": 0 00:13:53.254 }, 00:13:53.254 "claimed": true, 00:13:53.254 "claim_type": "exclusive_write", 00:13:53.254 "zoned": false, 00:13:53.254 "supported_io_types": { 00:13:53.254 "read": true, 00:13:53.254 "write": true, 00:13:53.254 "unmap": true, 00:13:53.254 "flush": true, 00:13:53.254 "reset": true, 00:13:53.254 "nvme_admin": false, 00:13:53.254 "nvme_io": false, 00:13:53.254 "nvme_io_md": false, 00:13:53.254 "write_zeroes": true, 00:13:53.254 "zcopy": true, 00:13:53.254 "get_zone_info": false, 00:13:53.254 "zone_management": false, 00:13:53.254 "zone_append": false, 00:13:53.254 "compare": false, 00:13:53.254 "compare_and_write": false, 00:13:53.254 "abort": true, 00:13:53.254 "seek_hole": false, 00:13:53.254 "seek_data": false, 00:13:53.254 "copy": true, 00:13:53.254 "nvme_iov_md": false 00:13:53.254 }, 00:13:53.254 "memory_domains": [ 00:13:53.254 { 00:13:53.254 "dma_device_id": "system", 00:13:53.254 "dma_device_type": 1 00:13:53.254 }, 00:13:53.254 { 00:13:53.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.254 "dma_device_type": 2 00:13:53.254 } 00:13:53.254 ], 00:13:53.254 "driver_specific": {} 00:13:53.254 } 00:13:53.254 ] 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.254 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.255 "name": "Existed_Raid", 00:13:53.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.255 "strip_size_kb": 0, 00:13:53.255 "state": "configuring", 00:13:53.255 "raid_level": "raid1", 00:13:53.255 "superblock": false, 00:13:53.255 "num_base_bdevs": 3, 
00:13:53.255 "num_base_bdevs_discovered": 2, 00:13:53.255 "num_base_bdevs_operational": 3, 00:13:53.255 "base_bdevs_list": [ 00:13:53.255 { 00:13:53.255 "name": "BaseBdev1", 00:13:53.255 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:53.255 "is_configured": true, 00:13:53.255 "data_offset": 0, 00:13:53.255 "data_size": 65536 00:13:53.255 }, 00:13:53.255 { 00:13:53.255 "name": "BaseBdev2", 00:13:53.255 "uuid": "004d27ed-890c-4012-ba97-ea4a060be079", 00:13:53.255 "is_configured": true, 00:13:53.255 "data_offset": 0, 00:13:53.255 "data_size": 65536 00:13:53.255 }, 00:13:53.255 { 00:13:53.255 "name": "BaseBdev3", 00:13:53.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.255 "is_configured": false, 00:13:53.255 "data_offset": 0, 00:13:53.255 "data_size": 0 00:13:53.255 } 00:13:53.255 ] 00:13:53.255 }' 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.255 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.824 [2024-11-27 14:13:24.537884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.824 [2024-11-27 14:13:24.537943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:53.824 [2024-11-27 14:13:24.537958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:53.824 [2024-11-27 14:13:24.538315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:53.824 [2024-11-27 14:13:24.538546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:13:53.824 [2024-11-27 14:13:24.538567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:53.824 [2024-11-27 14:13:24.538899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.824 BaseBdev3 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.824 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.824 [ 00:13:53.824 { 00:13:53.824 "name": "BaseBdev3", 00:13:53.824 "aliases": [ 00:13:53.824 
"e376edde-99b6-4726-a93f-873ea1ba0d3b" 00:13:53.824 ], 00:13:53.824 "product_name": "Malloc disk", 00:13:53.824 "block_size": 512, 00:13:53.824 "num_blocks": 65536, 00:13:53.824 "uuid": "e376edde-99b6-4726-a93f-873ea1ba0d3b", 00:13:53.824 "assigned_rate_limits": { 00:13:53.824 "rw_ios_per_sec": 0, 00:13:53.824 "rw_mbytes_per_sec": 0, 00:13:53.824 "r_mbytes_per_sec": 0, 00:13:53.824 "w_mbytes_per_sec": 0 00:13:53.824 }, 00:13:53.824 "claimed": true, 00:13:53.824 "claim_type": "exclusive_write", 00:13:53.824 "zoned": false, 00:13:53.824 "supported_io_types": { 00:13:53.824 "read": true, 00:13:53.824 "write": true, 00:13:53.825 "unmap": true, 00:13:53.825 "flush": true, 00:13:53.825 "reset": true, 00:13:53.825 "nvme_admin": false, 00:13:53.825 "nvme_io": false, 00:13:53.825 "nvme_io_md": false, 00:13:53.825 "write_zeroes": true, 00:13:53.825 "zcopy": true, 00:13:53.825 "get_zone_info": false, 00:13:53.825 "zone_management": false, 00:13:53.825 "zone_append": false, 00:13:53.825 "compare": false, 00:13:53.825 "compare_and_write": false, 00:13:53.825 "abort": true, 00:13:53.825 "seek_hole": false, 00:13:53.825 "seek_data": false, 00:13:53.825 "copy": true, 00:13:53.825 "nvme_iov_md": false 00:13:53.825 }, 00:13:53.825 "memory_domains": [ 00:13:53.825 { 00:13:53.825 "dma_device_id": "system", 00:13:53.825 "dma_device_type": 1 00:13:53.825 }, 00:13:53.825 { 00:13:53.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.825 "dma_device_type": 2 00:13:53.825 } 00:13:53.825 ], 00:13:53.825 "driver_specific": {} 00:13:53.825 } 00:13:53.825 ] 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.825 
14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.825 "name": "Existed_Raid", 00:13:53.825 "uuid": "1f708ffe-0a05-45f1-b859-3a9a43f745bf", 00:13:53.825 "strip_size_kb": 0, 00:13:53.825 "state": "online", 00:13:53.825 "raid_level": 
"raid1", 00:13:53.825 "superblock": false, 00:13:53.825 "num_base_bdevs": 3, 00:13:53.825 "num_base_bdevs_discovered": 3, 00:13:53.825 "num_base_bdevs_operational": 3, 00:13:53.825 "base_bdevs_list": [ 00:13:53.825 { 00:13:53.825 "name": "BaseBdev1", 00:13:53.825 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:53.825 "is_configured": true, 00:13:53.825 "data_offset": 0, 00:13:53.825 "data_size": 65536 00:13:53.825 }, 00:13:53.825 { 00:13:53.825 "name": "BaseBdev2", 00:13:53.825 "uuid": "004d27ed-890c-4012-ba97-ea4a060be079", 00:13:53.825 "is_configured": true, 00:13:53.825 "data_offset": 0, 00:13:53.825 "data_size": 65536 00:13:53.825 }, 00:13:53.825 { 00:13:53.825 "name": "BaseBdev3", 00:13:53.825 "uuid": "e376edde-99b6-4726-a93f-873ea1ba0d3b", 00:13:53.825 "is_configured": true, 00:13:53.825 "data_offset": 0, 00:13:53.825 "data_size": 65536 00:13:53.825 } 00:13:53.825 ] 00:13:53.825 }' 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.825 14:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.393 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.394 [2024-11-27 14:13:25.073458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:54.394 "name": "Existed_Raid", 00:13:54.394 "aliases": [ 00:13:54.394 "1f708ffe-0a05-45f1-b859-3a9a43f745bf" 00:13:54.394 ], 00:13:54.394 "product_name": "Raid Volume", 00:13:54.394 "block_size": 512, 00:13:54.394 "num_blocks": 65536, 00:13:54.394 "uuid": "1f708ffe-0a05-45f1-b859-3a9a43f745bf", 00:13:54.394 "assigned_rate_limits": { 00:13:54.394 "rw_ios_per_sec": 0, 00:13:54.394 "rw_mbytes_per_sec": 0, 00:13:54.394 "r_mbytes_per_sec": 0, 00:13:54.394 "w_mbytes_per_sec": 0 00:13:54.394 }, 00:13:54.394 "claimed": false, 00:13:54.394 "zoned": false, 00:13:54.394 "supported_io_types": { 00:13:54.394 "read": true, 00:13:54.394 "write": true, 00:13:54.394 "unmap": false, 00:13:54.394 "flush": false, 00:13:54.394 "reset": true, 00:13:54.394 "nvme_admin": false, 00:13:54.394 "nvme_io": false, 00:13:54.394 "nvme_io_md": false, 00:13:54.394 "write_zeroes": true, 00:13:54.394 "zcopy": false, 00:13:54.394 "get_zone_info": false, 00:13:54.394 "zone_management": false, 00:13:54.394 "zone_append": false, 00:13:54.394 "compare": false, 00:13:54.394 "compare_and_write": false, 00:13:54.394 "abort": false, 00:13:54.394 "seek_hole": false, 00:13:54.394 "seek_data": false, 00:13:54.394 "copy": false, 00:13:54.394 "nvme_iov_md": false 00:13:54.394 }, 00:13:54.394 "memory_domains": [ 00:13:54.394 { 00:13:54.394 "dma_device_id": "system", 00:13:54.394 "dma_device_type": 1 00:13:54.394 }, 00:13:54.394 { 
00:13:54.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.394 "dma_device_type": 2 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "dma_device_id": "system", 00:13:54.394 "dma_device_type": 1 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.394 "dma_device_type": 2 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "dma_device_id": "system", 00:13:54.394 "dma_device_type": 1 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.394 "dma_device_type": 2 00:13:54.394 } 00:13:54.394 ], 00:13:54.394 "driver_specific": { 00:13:54.394 "raid": { 00:13:54.394 "uuid": "1f708ffe-0a05-45f1-b859-3a9a43f745bf", 00:13:54.394 "strip_size_kb": 0, 00:13:54.394 "state": "online", 00:13:54.394 "raid_level": "raid1", 00:13:54.394 "superblock": false, 00:13:54.394 "num_base_bdevs": 3, 00:13:54.394 "num_base_bdevs_discovered": 3, 00:13:54.394 "num_base_bdevs_operational": 3, 00:13:54.394 "base_bdevs_list": [ 00:13:54.394 { 00:13:54.394 "name": "BaseBdev1", 00:13:54.394 "uuid": "b80a5e59-3fcd-48c3-833e-bf370542bc52", 00:13:54.394 "is_configured": true, 00:13:54.394 "data_offset": 0, 00:13:54.394 "data_size": 65536 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "name": "BaseBdev2", 00:13:54.394 "uuid": "004d27ed-890c-4012-ba97-ea4a060be079", 00:13:54.394 "is_configured": true, 00:13:54.394 "data_offset": 0, 00:13:54.394 "data_size": 65536 00:13:54.394 }, 00:13:54.394 { 00:13:54.394 "name": "BaseBdev3", 00:13:54.394 "uuid": "e376edde-99b6-4726-a93f-873ea1ba0d3b", 00:13:54.394 "is_configured": true, 00:13:54.394 "data_offset": 0, 00:13:54.394 "data_size": 65536 00:13:54.394 } 00:13:54.394 ] 00:13:54.394 } 00:13:54.394 } 00:13:54.394 }' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:13:54.394 BaseBdev2 00:13:54.394 BaseBdev3' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.394 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.395 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:54.395 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.395 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.395 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.395 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.653 [2024-11-27 14:13:25.380639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.653 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.654 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.654 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.654 14:13:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.654 "name": "Existed_Raid", 00:13:54.654 "uuid": "1f708ffe-0a05-45f1-b859-3a9a43f745bf", 00:13:54.654 "strip_size_kb": 0, 00:13:54.654 "state": "online", 00:13:54.654 "raid_level": "raid1", 00:13:54.654 "superblock": false, 00:13:54.654 "num_base_bdevs": 3, 00:13:54.654 "num_base_bdevs_discovered": 2, 00:13:54.654 "num_base_bdevs_operational": 2, 00:13:54.654 "base_bdevs_list": [ 00:13:54.654 { 00:13:54.654 "name": null, 00:13:54.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.654 "is_configured": false, 00:13:54.654 "data_offset": 0, 00:13:54.654 "data_size": 65536 00:13:54.654 }, 00:13:54.654 { 00:13:54.654 "name": "BaseBdev2", 00:13:54.654 "uuid": "004d27ed-890c-4012-ba97-ea4a060be079", 00:13:54.654 "is_configured": true, 00:13:54.654 "data_offset": 0, 00:13:54.654 "data_size": 65536 00:13:54.654 }, 00:13:54.654 { 00:13:54.654 "name": "BaseBdev3", 00:13:54.654 "uuid": "e376edde-99b6-4726-a93f-873ea1ba0d3b", 00:13:54.654 "is_configured": true, 00:13:54.654 "data_offset": 0, 00:13:54.654 "data_size": 65536 00:13:54.654 } 00:13:54.654 ] 00:13:54.654 }' 00:13:54.654 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.654 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 14:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 
-- # jq -r '.[0]["name"]' 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.223 [2024-11-27 14:13:26.046394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.223 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # 
rpc_cmd bdev_malloc_delete BaseBdev3 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.482 [2024-11-27 14:13:26.216918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.482 [2024-11-27 14:13:26.217037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.482 [2024-11-27 14:13:26.333057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.482 [2024-11-27 14:13:26.333131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.482 [2024-11-27 14:13:26.333147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 
00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.482 BaseBdev2 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:55.482 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.741 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.741 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.741 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.741 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 [ 00:13:55.742 { 00:13:55.742 "name": "BaseBdev2", 00:13:55.742 "aliases": [ 00:13:55.742 "6027d7c3-8716-4e60-93e9-232d4a54caad" 00:13:55.742 ], 00:13:55.742 "product_name": "Malloc disk", 00:13:55.742 "block_size": 512, 00:13:55.742 "num_blocks": 65536, 00:13:55.742 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:55.742 "assigned_rate_limits": { 00:13:55.742 "rw_ios_per_sec": 0, 00:13:55.742 "rw_mbytes_per_sec": 0, 00:13:55.742 "r_mbytes_per_sec": 0, 00:13:55.742 "w_mbytes_per_sec": 0 00:13:55.742 }, 00:13:55.742 "claimed": false, 00:13:55.742 "zoned": false, 00:13:55.742 "supported_io_types": { 00:13:55.742 "read": true, 00:13:55.742 "write": true, 00:13:55.742 "unmap": true, 00:13:55.742 "flush": true, 00:13:55.742 "reset": true, 00:13:55.742 "nvme_admin": false, 00:13:55.742 "nvme_io": false, 00:13:55.742 "nvme_io_md": false, 00:13:55.742 "write_zeroes": true, 00:13:55.742 "zcopy": true, 00:13:55.742 "get_zone_info": false, 00:13:55.742 "zone_management": false, 00:13:55.742 "zone_append": false, 00:13:55.742 "compare": false, 00:13:55.742 "compare_and_write": false, 00:13:55.742 "abort": true, 00:13:55.742 "seek_hole": false, 00:13:55.742 "seek_data": false, 00:13:55.742 "copy": true, 00:13:55.742 "nvme_iov_md": false 00:13:55.742 }, 00:13:55.742 "memory_domains": [ 00:13:55.742 { 00:13:55.742 "dma_device_id": "system", 00:13:55.742 "dma_device_type": 1 00:13:55.742 }, 00:13:55.742 { 00:13:55.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.742 "dma_device_type": 2 00:13:55.742 } 00:13:55.742 ], 00:13:55.742 "driver_specific": {} 00:13:55.742 } 00:13:55.742 ] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 
14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 BaseBdev3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 [ 00:13:55.742 { 00:13:55.742 "name": "BaseBdev3", 00:13:55.742 "aliases": [ 00:13:55.742 "806a8f2b-ceb4-4b49-9b70-9a9263b4df75" 00:13:55.742 ], 00:13:55.742 "product_name": "Malloc disk", 00:13:55.742 "block_size": 512, 00:13:55.742 "num_blocks": 65536, 00:13:55.742 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:55.742 "assigned_rate_limits": { 00:13:55.742 "rw_ios_per_sec": 0, 00:13:55.742 "rw_mbytes_per_sec": 0, 00:13:55.742 "r_mbytes_per_sec": 0, 00:13:55.742 "w_mbytes_per_sec": 0 00:13:55.742 }, 00:13:55.742 "claimed": false, 00:13:55.742 "zoned": false, 00:13:55.742 "supported_io_types": { 00:13:55.742 "read": true, 00:13:55.742 "write": true, 00:13:55.742 "unmap": true, 00:13:55.742 "flush": true, 00:13:55.742 "reset": true, 00:13:55.742 "nvme_admin": false, 00:13:55.742 "nvme_io": false, 00:13:55.742 "nvme_io_md": false, 00:13:55.742 "write_zeroes": true, 00:13:55.742 "zcopy": true, 00:13:55.742 "get_zone_info": false, 00:13:55.742 "zone_management": false, 00:13:55.742 "zone_append": false, 00:13:55.742 "compare": false, 00:13:55.742 "compare_and_write": false, 00:13:55.742 "abort": true, 00:13:55.742 "seek_hole": false, 00:13:55.742 "seek_data": false, 00:13:55.742 "copy": true, 00:13:55.742 "nvme_iov_md": false 00:13:55.742 }, 00:13:55.742 "memory_domains": [ 00:13:55.742 { 00:13:55.742 "dma_device_id": "system", 00:13:55.742 "dma_device_type": 1 00:13:55.742 }, 00:13:55.742 { 00:13:55.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.742 "dma_device_type": 2 00:13:55.742 } 00:13:55.742 ], 00:13:55.742 "driver_specific": {} 00:13:55.742 } 00:13:55.742 ] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 [2024-11-27 14:13:26.569502] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.742 [2024-11-27 14:13:26.569559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.742 [2024-11-27 14:13:26.569583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.742 [2024-11-27 14:13:26.571655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.742 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.742 "name": "Existed_Raid", 00:13:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.742 "strip_size_kb": 0, 00:13:55.742 "state": "configuring", 00:13:55.742 "raid_level": "raid1", 00:13:55.742 "superblock": false, 00:13:55.742 "num_base_bdevs": 3, 00:13:55.742 "num_base_bdevs_discovered": 2, 00:13:55.742 "num_base_bdevs_operational": 3, 00:13:55.742 "base_bdevs_list": [ 00:13:55.742 { 00:13:55.742 "name": "BaseBdev1", 00:13:55.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.742 "is_configured": false, 00:13:55.742 "data_offset": 0, 00:13:55.742 "data_size": 0 00:13:55.742 }, 00:13:55.742 { 00:13:55.742 "name": "BaseBdev2", 00:13:55.742 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:55.742 "is_configured": true, 00:13:55.742 "data_offset": 0, 00:13:55.742 "data_size": 65536 00:13:55.742 }, 00:13:55.742 { 
00:13:55.742 "name": "BaseBdev3", 00:13:55.742 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:55.742 "is_configured": true, 00:13:55.742 "data_offset": 0, 00:13:55.742 "data_size": 65536 00:13:55.743 } 00:13:55.743 ] 00:13:55.743 }' 00:13:55.743 14:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.743 14:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.310 [2024-11-27 14:13:27.056735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.310 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.311 "name": "Existed_Raid", 00:13:56.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.311 "strip_size_kb": 0, 00:13:56.311 "state": "configuring", 00:13:56.311 "raid_level": "raid1", 00:13:56.311 "superblock": false, 00:13:56.311 "num_base_bdevs": 3, 00:13:56.311 "num_base_bdevs_discovered": 1, 00:13:56.311 "num_base_bdevs_operational": 3, 00:13:56.311 "base_bdevs_list": [ 00:13:56.311 { 00:13:56.311 "name": "BaseBdev1", 00:13:56.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.311 "is_configured": false, 00:13:56.311 "data_offset": 0, 00:13:56.311 "data_size": 0 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "name": null, 00:13:56.311 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:56.311 "is_configured": false, 00:13:56.311 "data_offset": 0, 00:13:56.311 "data_size": 65536 00:13:56.311 }, 00:13:56.311 { 00:13:56.311 "name": "BaseBdev3", 00:13:56.311 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:56.311 "is_configured": true, 00:13:56.311 "data_offset": 0, 00:13:56.311 "data_size": 65536 00:13:56.311 } 00:13:56.311 ] 00:13:56.311 }' 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.311 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 [2024-11-27 14:13:27.652326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.880 BaseBdev1 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:56.880 
14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.880 [ 00:13:56.880 { 00:13:56.880 "name": "BaseBdev1", 00:13:56.880 "aliases": [ 00:13:56.880 "b01010fd-a8a5-4859-8373-46f4be79c19b" 00:13:56.880 ], 00:13:56.880 "product_name": "Malloc disk", 00:13:56.880 "block_size": 512, 00:13:56.880 "num_blocks": 65536, 00:13:56.880 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:56.880 "assigned_rate_limits": { 00:13:56.880 "rw_ios_per_sec": 0, 00:13:56.880 "rw_mbytes_per_sec": 0, 00:13:56.880 "r_mbytes_per_sec": 0, 00:13:56.880 "w_mbytes_per_sec": 0 00:13:56.880 }, 00:13:56.880 "claimed": true, 00:13:56.880 "claim_type": "exclusive_write", 00:13:56.880 "zoned": false, 00:13:56.880 "supported_io_types": { 00:13:56.880 "read": true, 00:13:56.880 "write": true, 00:13:56.880 "unmap": true, 00:13:56.880 "flush": true, 00:13:56.880 "reset": true, 00:13:56.880 "nvme_admin": false, 00:13:56.880 "nvme_io": false, 00:13:56.880 "nvme_io_md": false, 00:13:56.880 "write_zeroes": true, 00:13:56.880 "zcopy": true, 00:13:56.880 "get_zone_info": false, 00:13:56.880 "zone_management": false, 00:13:56.880 "zone_append": false, 00:13:56.880 "compare": 
false, 00:13:56.880 "compare_and_write": false, 00:13:56.880 "abort": true, 00:13:56.880 "seek_hole": false, 00:13:56.880 "seek_data": false, 00:13:56.880 "copy": true, 00:13:56.880 "nvme_iov_md": false 00:13:56.880 }, 00:13:56.880 "memory_domains": [ 00:13:56.880 { 00:13:56.880 "dma_device_id": "system", 00:13:56.880 "dma_device_type": 1 00:13:56.880 }, 00:13:56.880 { 00:13:56.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.880 "dma_device_type": 2 00:13:56.880 } 00:13:56.880 ], 00:13:56.880 "driver_specific": {} 00:13:56.880 } 00:13:56.880 ] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.880 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.881 "name": "Existed_Raid", 00:13:56.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.881 "strip_size_kb": 0, 00:13:56.881 "state": "configuring", 00:13:56.881 "raid_level": "raid1", 00:13:56.881 "superblock": false, 00:13:56.881 "num_base_bdevs": 3, 00:13:56.881 "num_base_bdevs_discovered": 2, 00:13:56.881 "num_base_bdevs_operational": 3, 00:13:56.881 "base_bdevs_list": [ 00:13:56.881 { 00:13:56.881 "name": "BaseBdev1", 00:13:56.881 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:56.881 "is_configured": true, 00:13:56.881 "data_offset": 0, 00:13:56.881 "data_size": 65536 00:13:56.881 }, 00:13:56.881 { 00:13:56.881 "name": null, 00:13:56.881 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:56.881 "is_configured": false, 00:13:56.881 "data_offset": 0, 00:13:56.881 "data_size": 65536 00:13:56.881 }, 00:13:56.881 { 00:13:56.881 "name": "BaseBdev3", 00:13:56.881 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:56.881 "is_configured": true, 00:13:56.881 "data_offset": 0, 00:13:56.881 "data_size": 65536 00:13:56.881 } 00:13:56.881 ] 00:13:56.881 }' 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.881 14:13:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.449 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:13:57.449 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.449 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:57.449 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.449 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.450 [2024-11-27 14:13:28.172230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.450 
14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.450 "name": "Existed_Raid", 00:13:57.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.450 "strip_size_kb": 0, 00:13:57.450 "state": "configuring", 00:13:57.450 "raid_level": "raid1", 00:13:57.450 "superblock": false, 00:13:57.450 "num_base_bdevs": 3, 00:13:57.450 "num_base_bdevs_discovered": 1, 00:13:57.450 "num_base_bdevs_operational": 3, 00:13:57.450 "base_bdevs_list": [ 00:13:57.450 { 00:13:57.450 "name": "BaseBdev1", 00:13:57.450 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:57.450 "is_configured": true, 00:13:57.450 "data_offset": 0, 00:13:57.450 "data_size": 65536 00:13:57.450 }, 00:13:57.450 { 00:13:57.450 "name": null, 00:13:57.450 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:57.450 "is_configured": false, 00:13:57.450 "data_offset": 0, 00:13:57.450 "data_size": 65536 00:13:57.450 }, 00:13:57.450 { 00:13:57.450 "name": null, 00:13:57.450 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:57.450 "is_configured": false, 00:13:57.450 "data_offset": 0, 
00:13:57.450 "data_size": 65536 00:13:57.450 } 00:13:57.450 ] 00:13:57.450 }' 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.450 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.709 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.709 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.709 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.709 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.709 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.968 [2024-11-27 14:13:28.692284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.968 "name": "Existed_Raid", 00:13:57.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.968 "strip_size_kb": 0, 00:13:57.968 "state": "configuring", 00:13:57.968 "raid_level": "raid1", 00:13:57.968 "superblock": false, 00:13:57.968 "num_base_bdevs": 3, 00:13:57.968 "num_base_bdevs_discovered": 2, 00:13:57.968 "num_base_bdevs_operational": 3, 00:13:57.968 "base_bdevs_list": [ 00:13:57.968 { 00:13:57.968 "name": "BaseBdev1", 00:13:57.968 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:57.968 "is_configured": true, 00:13:57.968 "data_offset": 0, 00:13:57.968 "data_size": 65536 
00:13:57.968 }, 00:13:57.968 { 00:13:57.968 "name": null, 00:13:57.968 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:57.968 "is_configured": false, 00:13:57.968 "data_offset": 0, 00:13:57.968 "data_size": 65536 00:13:57.968 }, 00:13:57.968 { 00:13:57.968 "name": "BaseBdev3", 00:13:57.968 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:57.968 "is_configured": true, 00:13:57.968 "data_offset": 0, 00:13:57.968 "data_size": 65536 00:13:57.968 } 00:13:57.968 ] 00:13:57.968 }' 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.968 14:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.228 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.228 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.228 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.228 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 [2024-11-27 14:13:29.199862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.487 "name": "Existed_Raid", 00:13:58.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.487 "strip_size_kb": 0, 00:13:58.487 "state": "configuring", 00:13:58.487 "raid_level": "raid1", 00:13:58.487 
"superblock": false, 00:13:58.487 "num_base_bdevs": 3, 00:13:58.487 "num_base_bdevs_discovered": 1, 00:13:58.487 "num_base_bdevs_operational": 3, 00:13:58.487 "base_bdevs_list": [ 00:13:58.487 { 00:13:58.487 "name": null, 00:13:58.487 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:58.487 "is_configured": false, 00:13:58.487 "data_offset": 0, 00:13:58.487 "data_size": 65536 00:13:58.487 }, 00:13:58.487 { 00:13:58.487 "name": null, 00:13:58.487 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:58.487 "is_configured": false, 00:13:58.487 "data_offset": 0, 00:13:58.487 "data_size": 65536 00:13:58.487 }, 00:13:58.487 { 00:13:58.487 "name": "BaseBdev3", 00:13:58.487 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:58.487 "is_configured": true, 00:13:58.487 "data_offset": 0, 00:13:58.487 "data_size": 65536 00:13:58.487 } 00:13:58.487 ] 00:13:58.487 }' 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.487 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.057 [2024-11-27 14:13:29.818419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.057 14:13:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.057 "name": "Existed_Raid", 00:13:59.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.057 "strip_size_kb": 0, 00:13:59.057 "state": "configuring", 00:13:59.057 "raid_level": "raid1", 00:13:59.057 "superblock": false, 00:13:59.057 "num_base_bdevs": 3, 00:13:59.057 "num_base_bdevs_discovered": 2, 00:13:59.057 "num_base_bdevs_operational": 3, 00:13:59.057 "base_bdevs_list": [ 00:13:59.057 { 00:13:59.057 "name": null, 00:13:59.057 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:59.057 "is_configured": false, 00:13:59.057 "data_offset": 0, 00:13:59.057 "data_size": 65536 00:13:59.057 }, 00:13:59.057 { 00:13:59.057 "name": "BaseBdev2", 00:13:59.057 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:59.057 "is_configured": true, 00:13:59.057 "data_offset": 0, 00:13:59.057 "data_size": 65536 00:13:59.057 }, 00:13:59.057 { 00:13:59.057 "name": "BaseBdev3", 00:13:59.057 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:59.057 "is_configured": true, 00:13:59.057 "data_offset": 0, 00:13:59.057 "data_size": 65536 00:13:59.057 } 00:13:59.057 ] 00:13:59.057 }' 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.057 14:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.321 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:59.321 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.321 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.321 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.321 14:13:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b01010fd-a8a5-4859-8373-46f4be79c19b 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.580 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.580 [2024-11-27 14:13:30.368838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:59.581 [2024-11-27 14:13:30.368897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:59.581 [2024-11-27 14:13:30.368905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.581 [2024-11-27 14:13:30.369182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:59.581 [2024-11-27 14:13:30.369333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:59.581 [2024-11-27 14:13:30.369351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:59.581 [2024-11-27 14:13:30.369600] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.581 NewBaseBdev 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.581 [ 00:13:59.581 { 00:13:59.581 "name": "NewBaseBdev", 00:13:59.581 "aliases": [ 00:13:59.581 "b01010fd-a8a5-4859-8373-46f4be79c19b" 00:13:59.581 ], 00:13:59.581 "product_name": "Malloc disk", 00:13:59.581 "block_size": 512, 00:13:59.581 "num_blocks": 65536, 00:13:59.581 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 
00:13:59.581 "assigned_rate_limits": { 00:13:59.581 "rw_ios_per_sec": 0, 00:13:59.581 "rw_mbytes_per_sec": 0, 00:13:59.581 "r_mbytes_per_sec": 0, 00:13:59.581 "w_mbytes_per_sec": 0 00:13:59.581 }, 00:13:59.581 "claimed": true, 00:13:59.581 "claim_type": "exclusive_write", 00:13:59.581 "zoned": false, 00:13:59.581 "supported_io_types": { 00:13:59.581 "read": true, 00:13:59.581 "write": true, 00:13:59.581 "unmap": true, 00:13:59.581 "flush": true, 00:13:59.581 "reset": true, 00:13:59.581 "nvme_admin": false, 00:13:59.581 "nvme_io": false, 00:13:59.581 "nvme_io_md": false, 00:13:59.581 "write_zeroes": true, 00:13:59.581 "zcopy": true, 00:13:59.581 "get_zone_info": false, 00:13:59.581 "zone_management": false, 00:13:59.581 "zone_append": false, 00:13:59.581 "compare": false, 00:13:59.581 "compare_and_write": false, 00:13:59.581 "abort": true, 00:13:59.581 "seek_hole": false, 00:13:59.581 "seek_data": false, 00:13:59.581 "copy": true, 00:13:59.581 "nvme_iov_md": false 00:13:59.581 }, 00:13:59.581 "memory_domains": [ 00:13:59.581 { 00:13:59.581 "dma_device_id": "system", 00:13:59.581 "dma_device_type": 1 00:13:59.581 }, 00:13:59.581 { 00:13:59.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.581 "dma_device_type": 2 00:13:59.581 } 00:13:59.581 ], 00:13:59.581 "driver_specific": {} 00:13:59.581 } 00:13:59.581 ] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.581 "name": "Existed_Raid", 00:13:59.581 "uuid": "e47af098-f025-4145-81ea-9a5c6bd54d87", 00:13:59.581 "strip_size_kb": 0, 00:13:59.581 "state": "online", 00:13:59.581 "raid_level": "raid1", 00:13:59.581 "superblock": false, 00:13:59.581 "num_base_bdevs": 3, 00:13:59.581 "num_base_bdevs_discovered": 3, 00:13:59.581 "num_base_bdevs_operational": 3, 00:13:59.581 "base_bdevs_list": [ 00:13:59.581 { 00:13:59.581 "name": "NewBaseBdev", 00:13:59.581 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:13:59.581 "is_configured": true, 00:13:59.581 "data_offset": 0, 00:13:59.581 "data_size": 65536 
00:13:59.581 }, 00:13:59.581 { 00:13:59.581 "name": "BaseBdev2", 00:13:59.581 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:13:59.581 "is_configured": true, 00:13:59.581 "data_offset": 0, 00:13:59.581 "data_size": 65536 00:13:59.581 }, 00:13:59.581 { 00:13:59.581 "name": "BaseBdev3", 00:13:59.581 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:13:59.581 "is_configured": true, 00:13:59.581 "data_offset": 0, 00:13:59.581 "data_size": 65536 00:13:59.581 } 00:13:59.581 ] 00:13:59.581 }' 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.581 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.151 [2024-11-27 14:13:30.900589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.151 "name": "Existed_Raid", 00:14:00.151 "aliases": [ 00:14:00.151 "e47af098-f025-4145-81ea-9a5c6bd54d87" 00:14:00.151 ], 00:14:00.151 "product_name": "Raid Volume", 00:14:00.151 "block_size": 512, 00:14:00.151 "num_blocks": 65536, 00:14:00.151 "uuid": "e47af098-f025-4145-81ea-9a5c6bd54d87", 00:14:00.151 "assigned_rate_limits": { 00:14:00.151 "rw_ios_per_sec": 0, 00:14:00.151 "rw_mbytes_per_sec": 0, 00:14:00.151 "r_mbytes_per_sec": 0, 00:14:00.151 "w_mbytes_per_sec": 0 00:14:00.151 }, 00:14:00.151 "claimed": false, 00:14:00.151 "zoned": false, 00:14:00.151 "supported_io_types": { 00:14:00.151 "read": true, 00:14:00.151 "write": true, 00:14:00.151 "unmap": false, 00:14:00.151 "flush": false, 00:14:00.151 "reset": true, 00:14:00.151 "nvme_admin": false, 00:14:00.151 "nvme_io": false, 00:14:00.151 "nvme_io_md": false, 00:14:00.151 "write_zeroes": true, 00:14:00.151 "zcopy": false, 00:14:00.151 "get_zone_info": false, 00:14:00.151 "zone_management": false, 00:14:00.151 "zone_append": false, 00:14:00.151 "compare": false, 00:14:00.151 "compare_and_write": false, 00:14:00.151 "abort": false, 00:14:00.151 "seek_hole": false, 00:14:00.151 "seek_data": false, 00:14:00.151 "copy": false, 00:14:00.151 "nvme_iov_md": false 00:14:00.151 }, 00:14:00.151 "memory_domains": [ 00:14:00.151 { 00:14:00.151 "dma_device_id": "system", 00:14:00.151 "dma_device_type": 1 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.151 "dma_device_type": 2 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "dma_device_id": "system", 00:14:00.151 "dma_device_type": 1 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.151 "dma_device_type": 2 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "dma_device_id": "system", 00:14:00.151 "dma_device_type": 1 00:14:00.151 }, 
00:14:00.151 { 00:14:00.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.151 "dma_device_type": 2 00:14:00.151 } 00:14:00.151 ], 00:14:00.151 "driver_specific": { 00:14:00.151 "raid": { 00:14:00.151 "uuid": "e47af098-f025-4145-81ea-9a5c6bd54d87", 00:14:00.151 "strip_size_kb": 0, 00:14:00.151 "state": "online", 00:14:00.151 "raid_level": "raid1", 00:14:00.151 "superblock": false, 00:14:00.151 "num_base_bdevs": 3, 00:14:00.151 "num_base_bdevs_discovered": 3, 00:14:00.151 "num_base_bdevs_operational": 3, 00:14:00.151 "base_bdevs_list": [ 00:14:00.151 { 00:14:00.151 "name": "NewBaseBdev", 00:14:00.151 "uuid": "b01010fd-a8a5-4859-8373-46f4be79c19b", 00:14:00.151 "is_configured": true, 00:14:00.151 "data_offset": 0, 00:14:00.151 "data_size": 65536 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "name": "BaseBdev2", 00:14:00.151 "uuid": "6027d7c3-8716-4e60-93e9-232d4a54caad", 00:14:00.151 "is_configured": true, 00:14:00.151 "data_offset": 0, 00:14:00.151 "data_size": 65536 00:14:00.151 }, 00:14:00.151 { 00:14:00.151 "name": "BaseBdev3", 00:14:00.151 "uuid": "806a8f2b-ceb4-4b49-9b70-9a9263b4df75", 00:14:00.151 "is_configured": true, 00:14:00.151 "data_offset": 0, 00:14:00.151 "data_size": 65536 00:14:00.151 } 00:14:00.151 ] 00:14:00.151 } 00:14:00.151 } 00:14:00.151 }' 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:00.151 BaseBdev2 00:14:00.151 BaseBdev3' 00:14:00.151 14:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.151 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.152 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.412 [2024-11-27 14:13:31.164228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.412 [2024-11-27 14:13:31.164268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.412 [2024-11-27 14:13:31.164355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.412 [2024-11-27 14:13:31.164680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.412 [2024-11-27 14:13:31.164702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67605 00:14:00.412 14:13:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67605 ']' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67605 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67605 00:14:00.412 killing process with pid 67605 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67605' 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67605 00:14:00.412 [2024-11-27 14:13:31.211952] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.412 14:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67605 00:14:00.671 [2024-11-27 14:13:31.584222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:02.050 14:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:02.050 00:14:02.050 real 0m11.498s 00:14:02.050 user 0m18.178s 00:14:02.050 sys 0m1.944s 00:14:02.050 14:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.050 14:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.050 ************************************ 00:14:02.050 END TEST raid_state_function_test 00:14:02.050 ************************************ 00:14:02.309 14:13:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:02.309 14:13:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:02.309 14:13:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:02.309 14:13:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.309 ************************************ 00:14:02.309 START TEST raid_state_function_test_sb 00:14:02.309 ************************************ 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:02.309 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68237 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68237' 00:14:02.310 Process raid pid: 68237 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68237 00:14:02.310 14:13:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68237 ']' 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.310 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.310 [2024-11-27 14:13:33.139138] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:02.310 [2024-11-27 14:13:33.139278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.569 [2024-11-27 14:13:33.322168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.569 [2024-11-27 14:13:33.460385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.829 [2024-11-27 14:13:33.709841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.829 [2024-11-27 14:13:33.709895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.087 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.087 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:03.087 14:13:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.087 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.087 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.348 [2024-11-27 14:13:34.044336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.348 [2024-11-27 14:13:34.044398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.348 [2024-11-27 14:13:34.044416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.348 [2024-11-27 14:13:34.044429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.348 [2024-11-27 14:13:34.044440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.348 [2024-11-27 14:13:34.044451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.348 
14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.348 "name": "Existed_Raid", 00:14:03.348 "uuid": "72af0914-1c4f-4bdb-98f0-1538fa0cb931", 00:14:03.348 "strip_size_kb": 0, 00:14:03.348 "state": "configuring", 00:14:03.348 "raid_level": "raid1", 00:14:03.348 "superblock": true, 00:14:03.348 "num_base_bdevs": 3, 00:14:03.348 "num_base_bdevs_discovered": 0, 00:14:03.348 "num_base_bdevs_operational": 3, 00:14:03.348 "base_bdevs_list": [ 00:14:03.348 { 00:14:03.348 "name": "BaseBdev1", 00:14:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.348 "is_configured": false, 00:14:03.348 "data_offset": 0, 00:14:03.348 "data_size": 0 00:14:03.348 }, 00:14:03.348 { 00:14:03.348 "name": "BaseBdev2", 00:14:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.348 "is_configured": false, 00:14:03.348 "data_offset": 0, 00:14:03.348 "data_size": 0 00:14:03.348 }, 00:14:03.348 { 00:14:03.348 
"name": "BaseBdev3", 00:14:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.348 "is_configured": false, 00:14:03.348 "data_offset": 0, 00:14:03.348 "data_size": 0 00:14:03.348 } 00:14:03.348 ] 00:14:03.348 }' 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.348 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.608 [2024-11-27 14:13:34.523501] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:03.608 [2024-11-27 14:13:34.523550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.608 [2024-11-27 14:13:34.531470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:03.608 [2024-11-27 14:13:34.531521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:03.608 [2024-11-27 14:13:34.531535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:03.608 [2024-11-27 
14:13:34.531550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:03.608 [2024-11-27 14:13:34.531557] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:03.608 [2024-11-27 14:13:34.531567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.608 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.868 [2024-11-27 14:13:34.579108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.868 BaseBdev1 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.868 [ 00:14:03.868 { 00:14:03.868 "name": "BaseBdev1", 00:14:03.868 "aliases": [ 00:14:03.868 "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6" 00:14:03.868 ], 00:14:03.868 "product_name": "Malloc disk", 00:14:03.868 "block_size": 512, 00:14:03.868 "num_blocks": 65536, 00:14:03.868 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:03.868 "assigned_rate_limits": { 00:14:03.868 "rw_ios_per_sec": 0, 00:14:03.868 "rw_mbytes_per_sec": 0, 00:14:03.868 "r_mbytes_per_sec": 0, 00:14:03.868 "w_mbytes_per_sec": 0 00:14:03.868 }, 00:14:03.868 "claimed": true, 00:14:03.868 "claim_type": "exclusive_write", 00:14:03.868 "zoned": false, 00:14:03.868 "supported_io_types": { 00:14:03.868 "read": true, 00:14:03.868 "write": true, 00:14:03.868 "unmap": true, 00:14:03.868 "flush": true, 00:14:03.868 "reset": true, 00:14:03.868 "nvme_admin": false, 00:14:03.868 "nvme_io": false, 00:14:03.868 "nvme_io_md": false, 00:14:03.868 "write_zeroes": true, 00:14:03.868 "zcopy": true, 00:14:03.868 "get_zone_info": false, 00:14:03.868 "zone_management": false, 00:14:03.868 "zone_append": false, 00:14:03.868 "compare": false, 00:14:03.868 "compare_and_write": false, 00:14:03.868 "abort": true, 00:14:03.868 "seek_hole": false, 00:14:03.868 "seek_data": false, 00:14:03.868 "copy": true, 00:14:03.868 "nvme_iov_md": false 00:14:03.868 }, 00:14:03.868 "memory_domains": [ 00:14:03.868 { 00:14:03.868 "dma_device_id": 
"system", 00:14:03.868 "dma_device_type": 1 00:14:03.868 }, 00:14:03.868 { 00:14:03.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.868 "dma_device_type": 2 00:14:03.868 } 00:14:03.868 ], 00:14:03.868 "driver_specific": {} 00:14:03.868 } 00:14:03.868 ] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.868 "name": "Existed_Raid", 00:14:03.868 "uuid": "d52ac844-07a9-425c-a5f6-e9e3cc6cd05d", 00:14:03.868 "strip_size_kb": 0, 00:14:03.868 "state": "configuring", 00:14:03.868 "raid_level": "raid1", 00:14:03.868 "superblock": true, 00:14:03.868 "num_base_bdevs": 3, 00:14:03.868 "num_base_bdevs_discovered": 1, 00:14:03.868 "num_base_bdevs_operational": 3, 00:14:03.868 "base_bdevs_list": [ 00:14:03.868 { 00:14:03.868 "name": "BaseBdev1", 00:14:03.868 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:03.868 "is_configured": true, 00:14:03.868 "data_offset": 2048, 00:14:03.868 "data_size": 63488 00:14:03.868 }, 00:14:03.868 { 00:14:03.868 "name": "BaseBdev2", 00:14:03.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.868 "is_configured": false, 00:14:03.868 "data_offset": 0, 00:14:03.868 "data_size": 0 00:14:03.868 }, 00:14:03.868 { 00:14:03.868 "name": "BaseBdev3", 00:14:03.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.868 "is_configured": false, 00:14:03.868 "data_offset": 0, 00:14:03.868 "data_size": 0 00:14:03.868 } 00:14:03.868 ] 00:14:03.868 }' 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.868 14:13:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:04.436 [2024-11-27 14:13:35.098288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.436 [2024-11-27 14:13:35.098354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.436 [2024-11-27 14:13:35.110327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.436 [2024-11-27 14:13:35.112447] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.436 [2024-11-27 14:13:35.112493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.436 [2024-11-27 14:13:35.112505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:04.436 [2024-11-27 14:13:35.112518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.436 14:13:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.436 "name": "Existed_Raid", 00:14:04.436 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:04.436 "strip_size_kb": 0, 00:14:04.436 "state": "configuring", 00:14:04.436 "raid_level": "raid1", 00:14:04.436 "superblock": true, 00:14:04.436 "num_base_bdevs": 3, 00:14:04.436 
"num_base_bdevs_discovered": 1, 00:14:04.436 "num_base_bdevs_operational": 3, 00:14:04.436 "base_bdevs_list": [ 00:14:04.436 { 00:14:04.436 "name": "BaseBdev1", 00:14:04.436 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:04.436 "is_configured": true, 00:14:04.436 "data_offset": 2048, 00:14:04.436 "data_size": 63488 00:14:04.436 }, 00:14:04.436 { 00:14:04.436 "name": "BaseBdev2", 00:14:04.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.436 "is_configured": false, 00:14:04.436 "data_offset": 0, 00:14:04.436 "data_size": 0 00:14:04.436 }, 00:14:04.436 { 00:14:04.436 "name": "BaseBdev3", 00:14:04.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.436 "is_configured": false, 00:14:04.436 "data_offset": 0, 00:14:04.436 "data_size": 0 00:14:04.436 } 00:14:04.436 ] 00:14:04.436 }' 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.436 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.695 [2024-11-27 14:13:35.610373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.695 BaseBdev2 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.695 [ 00:14:04.695 { 00:14:04.695 "name": "BaseBdev2", 00:14:04.695 "aliases": [ 00:14:04.695 "649cd733-4aae-49de-8231-75837deccaf4" 00:14:04.695 ], 00:14:04.695 "product_name": "Malloc disk", 00:14:04.695 "block_size": 512, 00:14:04.695 "num_blocks": 65536, 00:14:04.695 "uuid": "649cd733-4aae-49de-8231-75837deccaf4", 00:14:04.695 "assigned_rate_limits": { 00:14:04.695 "rw_ios_per_sec": 0, 00:14:04.695 "rw_mbytes_per_sec": 0, 00:14:04.695 "r_mbytes_per_sec": 0, 00:14:04.695 "w_mbytes_per_sec": 0 00:14:04.695 }, 00:14:04.695 "claimed": true, 00:14:04.695 "claim_type": "exclusive_write", 00:14:04.695 "zoned": false, 00:14:04.695 "supported_io_types": { 00:14:04.695 "read": true, 00:14:04.695 "write": true, 00:14:04.695 "unmap": true, 00:14:04.695 "flush": true, 00:14:04.695 "reset": true, 00:14:04.695 "nvme_admin": false, 
00:14:04.695 "nvme_io": false, 00:14:04.695 "nvme_io_md": false, 00:14:04.695 "write_zeroes": true, 00:14:04.695 "zcopy": true, 00:14:04.695 "get_zone_info": false, 00:14:04.695 "zone_management": false, 00:14:04.695 "zone_append": false, 00:14:04.695 "compare": false, 00:14:04.695 "compare_and_write": false, 00:14:04.695 "abort": true, 00:14:04.695 "seek_hole": false, 00:14:04.695 "seek_data": false, 00:14:04.695 "copy": true, 00:14:04.695 "nvme_iov_md": false 00:14:04.695 }, 00:14:04.695 "memory_domains": [ 00:14:04.695 { 00:14:04.695 "dma_device_id": "system", 00:14:04.695 "dma_device_type": 1 00:14:04.695 }, 00:14:04.695 { 00:14:04.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.695 "dma_device_type": 2 00:14:04.695 } 00:14:04.695 ], 00:14:04.695 "driver_specific": {} 00:14:04.695 } 00:14:04.695 ] 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.695 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.954 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.954 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:14:04.954 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.954 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.954 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.955 "name": "Existed_Raid", 00:14:04.955 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:04.955 "strip_size_kb": 0, 00:14:04.955 "state": "configuring", 00:14:04.955 "raid_level": "raid1", 00:14:04.955 "superblock": true, 00:14:04.955 "num_base_bdevs": 3, 00:14:04.955 "num_base_bdevs_discovered": 2, 00:14:04.955 "num_base_bdevs_operational": 3, 00:14:04.955 "base_bdevs_list": [ 00:14:04.955 { 00:14:04.955 "name": "BaseBdev1", 00:14:04.955 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:04.955 "is_configured": true, 00:14:04.955 "data_offset": 2048, 00:14:04.955 "data_size": 63488 00:14:04.955 }, 00:14:04.955 { 00:14:04.955 "name": "BaseBdev2", 00:14:04.955 "uuid": "649cd733-4aae-49de-8231-75837deccaf4", 00:14:04.955 "is_configured": true, 00:14:04.955 "data_offset": 2048, 00:14:04.955 "data_size": 63488 00:14:04.955 }, 
00:14:04.955 { 00:14:04.955 "name": "BaseBdev3", 00:14:04.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.955 "is_configured": false, 00:14:04.955 "data_offset": 0, 00:14:04.955 "data_size": 0 00:14:04.955 } 00:14:04.955 ] 00:14:04.955 }' 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.955 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.214 [2024-11-27 14:13:36.129772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:05.214 [2024-11-27 14:13:36.130089] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:05.214 [2024-11-27 14:13:36.130131] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:05.214 [2024-11-27 14:13:36.130455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:05.214 BaseBdev3 00:14:05.214 [2024-11-27 14:13:36.130666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:05.214 [2024-11-27 14:13:36.130686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:05.214 [2024-11-27 14:13:36.130861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:05.214 14:13:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.214 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.214 [ 00:14:05.214 { 00:14:05.214 "name": "BaseBdev3", 00:14:05.214 "aliases": [ 00:14:05.214 "c1c24c9b-3686-4989-8824-f27b7c5be2ce" 00:14:05.214 ], 00:14:05.214 "product_name": "Malloc disk", 00:14:05.214 "block_size": 512, 00:14:05.214 "num_blocks": 65536, 00:14:05.214 "uuid": "c1c24c9b-3686-4989-8824-f27b7c5be2ce", 00:14:05.214 "assigned_rate_limits": { 00:14:05.214 "rw_ios_per_sec": 0, 00:14:05.214 "rw_mbytes_per_sec": 0, 00:14:05.214 "r_mbytes_per_sec": 0, 00:14:05.214 "w_mbytes_per_sec": 0 00:14:05.214 }, 00:14:05.214 "claimed": true, 00:14:05.214 "claim_type": "exclusive_write", 00:14:05.214 "zoned": false, 
00:14:05.214 "supported_io_types": { 00:14:05.214 "read": true, 00:14:05.214 "write": true, 00:14:05.214 "unmap": true, 00:14:05.214 "flush": true, 00:14:05.214 "reset": true, 00:14:05.214 "nvme_admin": false, 00:14:05.214 "nvme_io": false, 00:14:05.214 "nvme_io_md": false, 00:14:05.214 "write_zeroes": true, 00:14:05.214 "zcopy": true, 00:14:05.214 "get_zone_info": false, 00:14:05.214 "zone_management": false, 00:14:05.214 "zone_append": false, 00:14:05.214 "compare": false, 00:14:05.214 "compare_and_write": false, 00:14:05.214 "abort": true, 00:14:05.214 "seek_hole": false, 00:14:05.214 "seek_data": false, 00:14:05.214 "copy": true, 00:14:05.214 "nvme_iov_md": false 00:14:05.214 }, 00:14:05.214 "memory_domains": [ 00:14:05.214 { 00:14:05.215 "dma_device_id": "system", 00:14:05.215 "dma_device_type": 1 00:14:05.215 }, 00:14:05.215 { 00:14:05.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.215 "dma_device_type": 2 00:14:05.215 } 00:14:05.215 ], 00:14:05.215 "driver_specific": {} 00:14:05.215 } 00:14:05.215 ] 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.215 14:13:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.215 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.475 "name": "Existed_Raid", 00:14:05.475 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:05.475 "strip_size_kb": 0, 00:14:05.475 "state": "online", 00:14:05.475 "raid_level": "raid1", 00:14:05.475 "superblock": true, 00:14:05.475 "num_base_bdevs": 3, 00:14:05.475 "num_base_bdevs_discovered": 3, 00:14:05.475 "num_base_bdevs_operational": 3, 00:14:05.475 "base_bdevs_list": [ 00:14:05.475 { 00:14:05.475 "name": "BaseBdev1", 00:14:05.475 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:05.475 "is_configured": true, 00:14:05.475 "data_offset": 2048, 00:14:05.475 "data_size": 63488 00:14:05.475 }, 00:14:05.475 { 00:14:05.475 
"name": "BaseBdev2", 00:14:05.475 "uuid": "649cd733-4aae-49de-8231-75837deccaf4", 00:14:05.475 "is_configured": true, 00:14:05.475 "data_offset": 2048, 00:14:05.475 "data_size": 63488 00:14:05.475 }, 00:14:05.475 { 00:14:05.475 "name": "BaseBdev3", 00:14:05.475 "uuid": "c1c24c9b-3686-4989-8824-f27b7c5be2ce", 00:14:05.475 "is_configured": true, 00:14:05.475 "data_offset": 2048, 00:14:05.475 "data_size": 63488 00:14:05.475 } 00:14:05.475 ] 00:14:05.475 }' 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.475 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:05.734 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.735 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.735 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.735 [2024-11-27 14:13:36.629412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.735 14:13:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.735 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.735 "name": "Existed_Raid", 00:14:05.735 "aliases": [ 00:14:05.735 "88aa1d0b-bece-4928-ba26-f242306557b3" 00:14:05.735 ], 00:14:05.735 "product_name": "Raid Volume", 00:14:05.735 "block_size": 512, 00:14:05.735 "num_blocks": 63488, 00:14:05.735 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:05.735 "assigned_rate_limits": { 00:14:05.735 "rw_ios_per_sec": 0, 00:14:05.735 "rw_mbytes_per_sec": 0, 00:14:05.735 "r_mbytes_per_sec": 0, 00:14:05.735 "w_mbytes_per_sec": 0 00:14:05.735 }, 00:14:05.735 "claimed": false, 00:14:05.735 "zoned": false, 00:14:05.735 "supported_io_types": { 00:14:05.735 "read": true, 00:14:05.735 "write": true, 00:14:05.735 "unmap": false, 00:14:05.735 "flush": false, 00:14:05.735 "reset": true, 00:14:05.735 "nvme_admin": false, 00:14:05.735 "nvme_io": false, 00:14:05.735 "nvme_io_md": false, 00:14:05.735 "write_zeroes": true, 00:14:05.735 "zcopy": false, 00:14:05.735 "get_zone_info": false, 00:14:05.735 "zone_management": false, 00:14:05.735 "zone_append": false, 00:14:05.735 "compare": false, 00:14:05.735 "compare_and_write": false, 00:14:05.735 "abort": false, 00:14:05.735 "seek_hole": false, 00:14:05.735 "seek_data": false, 00:14:05.735 "copy": false, 00:14:05.735 "nvme_iov_md": false 00:14:05.735 }, 00:14:05.735 "memory_domains": [ 00:14:05.735 { 00:14:05.735 "dma_device_id": "system", 00:14:05.735 "dma_device_type": 1 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.735 "dma_device_type": 2 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "dma_device_id": "system", 00:14:05.735 "dma_device_type": 1 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.735 "dma_device_type": 2 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "dma_device_id": "system", 00:14:05.735 "dma_device_type": 1 00:14:05.735 }, 
00:14:05.735 { 00:14:05.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.735 "dma_device_type": 2 00:14:05.735 } 00:14:05.735 ], 00:14:05.735 "driver_specific": { 00:14:05.735 "raid": { 00:14:05.735 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:05.735 "strip_size_kb": 0, 00:14:05.735 "state": "online", 00:14:05.735 "raid_level": "raid1", 00:14:05.735 "superblock": true, 00:14:05.735 "num_base_bdevs": 3, 00:14:05.735 "num_base_bdevs_discovered": 3, 00:14:05.735 "num_base_bdevs_operational": 3, 00:14:05.735 "base_bdevs_list": [ 00:14:05.735 { 00:14:05.735 "name": "BaseBdev1", 00:14:05.735 "uuid": "23e9e8a4-4c71-4d5b-b75a-8bc209a2abc6", 00:14:05.735 "is_configured": true, 00:14:05.735 "data_offset": 2048, 00:14:05.735 "data_size": 63488 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "name": "BaseBdev2", 00:14:05.735 "uuid": "649cd733-4aae-49de-8231-75837deccaf4", 00:14:05.735 "is_configured": true, 00:14:05.735 "data_offset": 2048, 00:14:05.735 "data_size": 63488 00:14:05.735 }, 00:14:05.735 { 00:14:05.735 "name": "BaseBdev3", 00:14:05.735 "uuid": "c1c24c9b-3686-4989-8824-f27b7c5be2ce", 00:14:05.735 "is_configured": true, 00:14:05.735 "data_offset": 2048, 00:14:05.735 "data_size": 63488 00:14:05.735 } 00:14:05.735 ] 00:14:05.735 } 00:14:05.735 } 00:14:05.735 }' 00:14:05.735 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:05.994 BaseBdev2 00:14:05.994 BaseBdev3' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.994 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.995 14:13:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.995 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.995 [2024-11-27 14:13:36.900635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.254 "name": "Existed_Raid", 00:14:06.254 "uuid": "88aa1d0b-bece-4928-ba26-f242306557b3", 00:14:06.254 "strip_size_kb": 0, 00:14:06.254 "state": "online", 00:14:06.254 "raid_level": 
"raid1", 00:14:06.254 "superblock": true, 00:14:06.254 "num_base_bdevs": 3, 00:14:06.254 "num_base_bdevs_discovered": 2, 00:14:06.254 "num_base_bdevs_operational": 2, 00:14:06.254 "base_bdevs_list": [ 00:14:06.254 { 00:14:06.254 "name": null, 00:14:06.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.254 "is_configured": false, 00:14:06.254 "data_offset": 0, 00:14:06.254 "data_size": 63488 00:14:06.254 }, 00:14:06.254 { 00:14:06.254 "name": "BaseBdev2", 00:14:06.254 "uuid": "649cd733-4aae-49de-8231-75837deccaf4", 00:14:06.254 "is_configured": true, 00:14:06.254 "data_offset": 2048, 00:14:06.254 "data_size": 63488 00:14:06.254 }, 00:14:06.254 { 00:14:06.254 "name": "BaseBdev3", 00:14:06.254 "uuid": "c1c24c9b-3686-4989-8824-f27b7c5be2ce", 00:14:06.254 "is_configured": true, 00:14:06.254 "data_offset": 2048, 00:14:06.254 "data_size": 63488 00:14:06.254 } 00:14:06.254 ] 00:14:06.254 }' 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.254 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.514 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.774 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.775 [2024-11-27 14:13:37.521577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:06.775 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.775 14:13:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.775 [2024-11-27 14:13:37.679312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.775 [2024-11-27 14:13:37.679460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.035 [2024-11-27 14:13:37.785567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.035 [2024-11-27 14:13:37.785643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.035 [2024-11-27 14:13:37.785673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:07.035 14:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 BaseBdev2 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:07.035 14:13:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 [ 00:14:07.035 { 00:14:07.035 "name": "BaseBdev2", 00:14:07.035 "aliases": [ 00:14:07.035 "f5c6b670-cf54-432e-8785-9852a28550d8" 00:14:07.035 ], 00:14:07.035 "product_name": "Malloc disk", 00:14:07.035 "block_size": 512, 00:14:07.035 "num_blocks": 65536, 00:14:07.035 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:07.035 "assigned_rate_limits": { 00:14:07.035 "rw_ios_per_sec": 0, 00:14:07.035 "rw_mbytes_per_sec": 0, 00:14:07.035 "r_mbytes_per_sec": 0, 00:14:07.035 "w_mbytes_per_sec": 0 00:14:07.035 }, 00:14:07.035 "claimed": false, 00:14:07.035 "zoned": false, 00:14:07.035 "supported_io_types": { 00:14:07.035 "read": true, 00:14:07.035 "write": true, 00:14:07.035 "unmap": true, 00:14:07.035 "flush": true, 00:14:07.035 "reset": true, 00:14:07.035 "nvme_admin": false, 00:14:07.035 "nvme_io": false, 00:14:07.035 "nvme_io_md": false, 00:14:07.035 "write_zeroes": true, 00:14:07.035 "zcopy": true, 00:14:07.035 "get_zone_info": false, 00:14:07.035 "zone_management": false, 00:14:07.035 "zone_append": false, 00:14:07.035 "compare": false, 00:14:07.035 "compare_and_write": false, 00:14:07.035 "abort": true, 00:14:07.035 "seek_hole": false, 00:14:07.035 "seek_data": false, 00:14:07.035 "copy": true, 00:14:07.035 "nvme_iov_md": false 00:14:07.035 }, 00:14:07.035 "memory_domains": [ 00:14:07.035 { 00:14:07.035 "dma_device_id": "system", 00:14:07.035 "dma_device_type": 1 00:14:07.035 }, 00:14:07.035 { 00:14:07.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.035 "dma_device_type": 2 00:14:07.035 } 00:14:07.035 ], 00:14:07.035 "driver_specific": {} 00:14:07.035 } 00:14:07.035 ] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 BaseBdev3 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.035 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.035 [ 00:14:07.035 { 00:14:07.035 "name": "BaseBdev3", 00:14:07.035 "aliases": [ 00:14:07.035 "401539c1-adf6-45c3-8311-46246908cdfc" 00:14:07.035 ], 00:14:07.035 "product_name": "Malloc disk", 00:14:07.035 "block_size": 512, 00:14:07.035 "num_blocks": 65536, 00:14:07.035 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:07.035 "assigned_rate_limits": { 00:14:07.035 "rw_ios_per_sec": 0, 00:14:07.035 "rw_mbytes_per_sec": 0, 00:14:07.035 "r_mbytes_per_sec": 0, 00:14:07.035 "w_mbytes_per_sec": 0 00:14:07.035 }, 00:14:07.035 "claimed": false, 00:14:07.035 "zoned": false, 00:14:07.035 "supported_io_types": { 00:14:07.035 "read": true, 00:14:07.035 "write": true, 00:14:07.035 "unmap": true, 00:14:07.035 "flush": true, 00:14:07.035 "reset": true, 00:14:07.035 "nvme_admin": false, 00:14:07.035 "nvme_io": false, 00:14:07.035 "nvme_io_md": false, 00:14:07.035 "write_zeroes": true, 00:14:07.035 "zcopy": true, 00:14:07.035 "get_zone_info": false, 00:14:07.035 "zone_management": false, 00:14:07.035 "zone_append": false, 00:14:07.035 "compare": false, 00:14:07.035 "compare_and_write": false, 00:14:07.035 "abort": true, 00:14:07.035 "seek_hole": false, 00:14:07.035 "seek_data": false, 00:14:07.035 "copy": true, 00:14:07.035 "nvme_iov_md": false 00:14:07.036 }, 00:14:07.036 "memory_domains": [ 00:14:07.036 { 00:14:07.036 "dma_device_id": "system", 00:14:07.036 "dma_device_type": 1 00:14:07.036 }, 00:14:07.036 { 00:14:07.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.036 "dma_device_type": 2 00:14:07.036 } 00:14:07.036 ], 00:14:07.036 "driver_specific": {} 00:14:07.036 } 00:14:07.036 ] 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.036 
14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.036 [2024-11-27 14:13:37.982703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.036 [2024-11-27 14:13:37.982750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.036 [2024-11-27 14:13:37.982774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.036 [2024-11-27 14:13:37.984927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.036 14:13:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.036 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.296 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.296 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.296 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.296 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.296 14:13:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.296 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.296 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.296 "name": "Existed_Raid", 00:14:07.296 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:07.296 "strip_size_kb": 0, 00:14:07.296 "state": "configuring", 00:14:07.296 "raid_level": "raid1", 00:14:07.296 "superblock": true, 00:14:07.296 "num_base_bdevs": 3, 00:14:07.296 "num_base_bdevs_discovered": 2, 00:14:07.296 "num_base_bdevs_operational": 3, 00:14:07.296 "base_bdevs_list": [ 00:14:07.296 { 00:14:07.296 "name": "BaseBdev1", 00:14:07.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.296 "is_configured": false, 00:14:07.296 "data_offset": 0, 00:14:07.296 "data_size": 0 00:14:07.296 }, 00:14:07.296 { 00:14:07.296 "name": "BaseBdev2", 00:14:07.296 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:07.296 "is_configured": 
true, 00:14:07.296 "data_offset": 2048, 00:14:07.296 "data_size": 63488 00:14:07.296 }, 00:14:07.296 { 00:14:07.296 "name": "BaseBdev3", 00:14:07.296 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:07.296 "is_configured": true, 00:14:07.296 "data_offset": 2048, 00:14:07.296 "data_size": 63488 00:14:07.296 } 00:14:07.296 ] 00:14:07.296 }' 00:14:07.296 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.296 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.556 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.556 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.557 [2024-11-27 14:13:38.410077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.557 14:13:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.557 "name": "Existed_Raid", 00:14:07.557 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:07.557 "strip_size_kb": 0, 00:14:07.557 "state": "configuring", 00:14:07.557 "raid_level": "raid1", 00:14:07.557 "superblock": true, 00:14:07.557 "num_base_bdevs": 3, 00:14:07.557 "num_base_bdevs_discovered": 1, 00:14:07.557 "num_base_bdevs_operational": 3, 00:14:07.557 "base_bdevs_list": [ 00:14:07.557 { 00:14:07.557 "name": "BaseBdev1", 00:14:07.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.557 "is_configured": false, 00:14:07.557 "data_offset": 0, 00:14:07.557 "data_size": 0 00:14:07.557 }, 00:14:07.557 { 00:14:07.557 "name": null, 00:14:07.557 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:07.557 "is_configured": false, 00:14:07.557 "data_offset": 0, 00:14:07.557 "data_size": 63488 00:14:07.557 }, 00:14:07.557 { 00:14:07.557 "name": "BaseBdev3", 00:14:07.557 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:07.557 "is_configured": true, 
00:14:07.557 "data_offset": 2048, 00:14:07.557 "data_size": 63488 00:14:07.557 } 00:14:07.557 ] 00:14:07.557 }' 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.557 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 [2024-11-27 14:13:38.975947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.126 BaseBdev1 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:08.126 
14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.126 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.126 [ 00:14:08.126 { 00:14:08.126 "name": "BaseBdev1", 00:14:08.126 "aliases": [ 00:14:08.126 "c8172a35-9670-4f9f-a1aa-8018d613dd7a" 00:14:08.126 ], 00:14:08.126 "product_name": "Malloc disk", 00:14:08.126 "block_size": 512, 00:14:08.126 "num_blocks": 65536, 00:14:08.126 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:08.126 "assigned_rate_limits": { 00:14:08.126 "rw_ios_per_sec": 0, 00:14:08.126 "rw_mbytes_per_sec": 0, 00:14:08.126 "r_mbytes_per_sec": 0, 00:14:08.126 "w_mbytes_per_sec": 0 00:14:08.126 }, 00:14:08.126 "claimed": true, 00:14:08.126 "claim_type": "exclusive_write", 00:14:08.126 "zoned": false, 00:14:08.126 "supported_io_types": { 00:14:08.126 "read": true, 00:14:08.126 "write": true, 00:14:08.126 "unmap": true, 00:14:08.126 "flush": true, 00:14:08.126 "reset": true, 00:14:08.126 "nvme_admin": false, 00:14:08.126 "nvme_io": 
false, 00:14:08.126 "nvme_io_md": false, 00:14:08.126 "write_zeroes": true, 00:14:08.126 "zcopy": true, 00:14:08.126 "get_zone_info": false, 00:14:08.126 "zone_management": false, 00:14:08.126 "zone_append": false, 00:14:08.126 "compare": false, 00:14:08.126 "compare_and_write": false, 00:14:08.126 "abort": true, 00:14:08.126 "seek_hole": false, 00:14:08.126 "seek_data": false, 00:14:08.126 "copy": true, 00:14:08.127 "nvme_iov_md": false 00:14:08.127 }, 00:14:08.127 "memory_domains": [ 00:14:08.127 { 00:14:08.127 "dma_device_id": "system", 00:14:08.127 "dma_device_type": 1 00:14:08.127 }, 00:14:08.127 { 00:14:08.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.127 "dma_device_type": 2 00:14:08.127 } 00:14:08.127 ], 00:14:08.127 "driver_specific": {} 00:14:08.127 } 00:14:08.127 ] 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.127 14:13:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.127 "name": "Existed_Raid", 00:14:08.127 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:08.127 "strip_size_kb": 0, 00:14:08.127 "state": "configuring", 00:14:08.127 "raid_level": "raid1", 00:14:08.127 "superblock": true, 00:14:08.127 "num_base_bdevs": 3, 00:14:08.127 "num_base_bdevs_discovered": 2, 00:14:08.127 "num_base_bdevs_operational": 3, 00:14:08.127 "base_bdevs_list": [ 00:14:08.127 { 00:14:08.127 "name": "BaseBdev1", 00:14:08.127 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:08.127 "is_configured": true, 00:14:08.127 "data_offset": 2048, 00:14:08.127 "data_size": 63488 00:14:08.127 }, 00:14:08.127 { 00:14:08.127 "name": null, 00:14:08.127 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:08.127 "is_configured": false, 00:14:08.127 "data_offset": 0, 00:14:08.127 "data_size": 63488 00:14:08.127 }, 00:14:08.127 { 00:14:08.127 "name": "BaseBdev3", 00:14:08.127 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:08.127 "is_configured": true, 00:14:08.127 "data_offset": 2048, 00:14:08.127 "data_size": 63488 00:14:08.127 } 00:14:08.127 ] 00:14:08.127 }' 
00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.127 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.703 [2024-11-27 14:13:39.559164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.703 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.704 
14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.704 "name": "Existed_Raid", 00:14:08.704 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:08.704 "strip_size_kb": 0, 00:14:08.704 "state": "configuring", 00:14:08.704 "raid_level": "raid1", 00:14:08.704 "superblock": true, 00:14:08.704 "num_base_bdevs": 3, 00:14:08.704 "num_base_bdevs_discovered": 1, 00:14:08.704 "num_base_bdevs_operational": 3, 00:14:08.704 "base_bdevs_list": [ 00:14:08.704 { 00:14:08.704 "name": "BaseBdev1", 00:14:08.704 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:08.704 "is_configured": true, 00:14:08.704 "data_offset": 2048, 00:14:08.704 "data_size": 63488 00:14:08.704 }, 00:14:08.704 { 
00:14:08.704 "name": null, 00:14:08.704 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:08.704 "is_configured": false, 00:14:08.704 "data_offset": 0, 00:14:08.704 "data_size": 63488 00:14:08.704 }, 00:14:08.704 { 00:14:08.704 "name": null, 00:14:08.704 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:08.704 "is_configured": false, 00:14:08.704 "data_offset": 0, 00:14:08.704 "data_size": 63488 00:14:08.704 } 00:14:08.704 ] 00:14:08.704 }' 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.704 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.272 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.272 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.272 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 [2024-11-27 14:13:40.010415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.272 14:13:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.272 "name": "Existed_Raid", 00:14:09.272 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:09.272 "strip_size_kb": 0, 
00:14:09.272 "state": "configuring", 00:14:09.272 "raid_level": "raid1", 00:14:09.272 "superblock": true, 00:14:09.272 "num_base_bdevs": 3, 00:14:09.272 "num_base_bdevs_discovered": 2, 00:14:09.272 "num_base_bdevs_operational": 3, 00:14:09.272 "base_bdevs_list": [ 00:14:09.272 { 00:14:09.272 "name": "BaseBdev1", 00:14:09.272 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:09.272 "is_configured": true, 00:14:09.272 "data_offset": 2048, 00:14:09.272 "data_size": 63488 00:14:09.272 }, 00:14:09.272 { 00:14:09.272 "name": null, 00:14:09.272 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:09.272 "is_configured": false, 00:14:09.272 "data_offset": 0, 00:14:09.272 "data_size": 63488 00:14:09.272 }, 00:14:09.272 { 00:14:09.272 "name": "BaseBdev3", 00:14:09.272 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:09.272 "is_configured": true, 00:14:09.272 "data_offset": 2048, 00:14:09.272 "data_size": 63488 00:14:09.272 } 00:14:09.272 ] 00:14:09.272 }' 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.272 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.842 [2024-11-27 14:13:40.557565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.842 "name": "Existed_Raid", 00:14:09.842 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:09.842 "strip_size_kb": 0, 00:14:09.842 "state": "configuring", 00:14:09.842 "raid_level": "raid1", 00:14:09.842 "superblock": true, 00:14:09.842 "num_base_bdevs": 3, 00:14:09.842 "num_base_bdevs_discovered": 1, 00:14:09.842 "num_base_bdevs_operational": 3, 00:14:09.842 "base_bdevs_list": [ 00:14:09.842 { 00:14:09.842 "name": null, 00:14:09.842 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:09.842 "is_configured": false, 00:14:09.842 "data_offset": 0, 00:14:09.842 "data_size": 63488 00:14:09.842 }, 00:14:09.842 { 00:14:09.842 "name": null, 00:14:09.842 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:09.842 "is_configured": false, 00:14:09.842 "data_offset": 0, 00:14:09.842 "data_size": 63488 00:14:09.842 }, 00:14:09.842 { 00:14:09.842 "name": "BaseBdev3", 00:14:09.842 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:09.842 "is_configured": true, 00:14:09.842 "data_offset": 2048, 00:14:09.842 "data_size": 63488 00:14:09.842 } 00:14:09.842 ] 00:14:09.842 }' 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.842 14:13:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 14:13:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 [2024-11-27 14:13:41.197321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.412 "name": "Existed_Raid", 00:14:10.412 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:10.412 "strip_size_kb": 0, 00:14:10.412 "state": "configuring", 00:14:10.412 "raid_level": "raid1", 00:14:10.412 "superblock": true, 00:14:10.412 "num_base_bdevs": 3, 00:14:10.412 "num_base_bdevs_discovered": 2, 00:14:10.412 "num_base_bdevs_operational": 3, 00:14:10.412 "base_bdevs_list": [ 00:14:10.412 { 00:14:10.412 "name": null, 00:14:10.412 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:10.412 "is_configured": false, 00:14:10.412 "data_offset": 0, 00:14:10.412 "data_size": 63488 00:14:10.412 }, 00:14:10.412 { 00:14:10.412 "name": "BaseBdev2", 00:14:10.412 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:10.412 "is_configured": true, 00:14:10.412 "data_offset": 2048, 00:14:10.412 "data_size": 63488 00:14:10.412 }, 00:14:10.412 { 00:14:10.412 "name": "BaseBdev3", 00:14:10.412 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:10.412 "is_configured": true, 00:14:10.412 "data_offset": 2048, 00:14:10.412 "data_size": 63488 00:14:10.412 } 00:14:10.412 ] 00:14:10.412 }' 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.412 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.981 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.981 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:10.981 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.981 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.981 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c8172a35-9670-4f9f-a1aa-8018d613dd7a 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 [2024-11-27 14:13:41.787482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:10.982 [2024-11-27 14:13:41.787862] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.982 [2024-11-27 14:13:41.787885] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:10.982 [2024-11-27 14:13:41.788251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.982 [2024-11-27 14:13:41.788464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.982 [2024-11-27 14:13:41.788540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:10.982 NewBaseBdev 00:14:10.982 [2024-11-27 14:13:41.788781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 [ 00:14:10.982 { 00:14:10.982 "name": "NewBaseBdev", 00:14:10.982 "aliases": [ 00:14:10.982 "c8172a35-9670-4f9f-a1aa-8018d613dd7a" 00:14:10.982 ], 00:14:10.982 "product_name": "Malloc disk", 00:14:10.982 "block_size": 512, 00:14:10.982 "num_blocks": 65536, 00:14:10.982 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:10.982 "assigned_rate_limits": { 00:14:10.982 "rw_ios_per_sec": 0, 00:14:10.982 "rw_mbytes_per_sec": 0, 00:14:10.982 "r_mbytes_per_sec": 0, 00:14:10.982 "w_mbytes_per_sec": 0 00:14:10.982 }, 00:14:10.982 "claimed": true, 00:14:10.982 "claim_type": "exclusive_write", 00:14:10.982 "zoned": false, 00:14:10.982 "supported_io_types": { 00:14:10.982 "read": true, 00:14:10.982 "write": true, 00:14:10.982 "unmap": true, 00:14:10.982 "flush": true, 00:14:10.982 "reset": true, 00:14:10.982 "nvme_admin": false, 00:14:10.982 "nvme_io": false, 00:14:10.982 "nvme_io_md": false, 00:14:10.982 "write_zeroes": true, 00:14:10.982 "zcopy": true, 00:14:10.982 "get_zone_info": false, 00:14:10.982 "zone_management": false, 00:14:10.982 "zone_append": false, 00:14:10.982 "compare": false, 00:14:10.982 "compare_and_write": false, 00:14:10.982 "abort": true, 00:14:10.982 "seek_hole": false, 00:14:10.982 "seek_data": false, 00:14:10.982 "copy": true, 00:14:10.982 "nvme_iov_md": false 00:14:10.982 }, 00:14:10.982 "memory_domains": [ 00:14:10.982 { 00:14:10.982 "dma_device_id": "system", 00:14:10.982 "dma_device_type": 1 00:14:10.982 }, 00:14:10.982 { 00:14:10.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.982 "dma_device_type": 2 00:14:10.982 } 00:14:10.982 ], 00:14:10.982 
"driver_specific": {} 00:14:10.982 } 00:14:10.982 ] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.982 "name": "Existed_Raid", 00:14:10.982 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:10.982 "strip_size_kb": 0, 00:14:10.982 "state": "online", 00:14:10.982 "raid_level": "raid1", 00:14:10.982 "superblock": true, 00:14:10.982 "num_base_bdevs": 3, 00:14:10.982 "num_base_bdevs_discovered": 3, 00:14:10.982 "num_base_bdevs_operational": 3, 00:14:10.982 "base_bdevs_list": [ 00:14:10.982 { 00:14:10.982 "name": "NewBaseBdev", 00:14:10.982 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:10.982 "is_configured": true, 00:14:10.982 "data_offset": 2048, 00:14:10.982 "data_size": 63488 00:14:10.982 }, 00:14:10.982 { 00:14:10.982 "name": "BaseBdev2", 00:14:10.982 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:10.982 "is_configured": true, 00:14:10.982 "data_offset": 2048, 00:14:10.982 "data_size": 63488 00:14:10.982 }, 00:14:10.982 { 00:14:10.982 "name": "BaseBdev3", 00:14:10.982 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:10.982 "is_configured": true, 00:14:10.982 "data_offset": 2048, 00:14:10.982 "data_size": 63488 00:14:10.982 } 00:14:10.982 ] 00:14:10.982 }' 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.982 14:13:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.613 14:13:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.613 [2024-11-27 14:13:42.275075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.613 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.613 "name": "Existed_Raid", 00:14:11.613 "aliases": [ 00:14:11.613 "57ebb6ad-1792-4e78-80d7-7ada2f159f28" 00:14:11.613 ], 00:14:11.613 "product_name": "Raid Volume", 00:14:11.613 "block_size": 512, 00:14:11.613 "num_blocks": 63488, 00:14:11.613 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:11.613 "assigned_rate_limits": { 00:14:11.613 "rw_ios_per_sec": 0, 00:14:11.613 "rw_mbytes_per_sec": 0, 00:14:11.613 "r_mbytes_per_sec": 0, 00:14:11.613 "w_mbytes_per_sec": 0 00:14:11.613 }, 00:14:11.613 "claimed": false, 00:14:11.613 "zoned": false, 00:14:11.613 "supported_io_types": { 00:14:11.613 "read": true, 00:14:11.613 "write": true, 00:14:11.613 "unmap": false, 00:14:11.613 "flush": false, 00:14:11.613 "reset": true, 00:14:11.613 "nvme_admin": false, 00:14:11.613 "nvme_io": false, 00:14:11.613 "nvme_io_md": false, 00:14:11.613 "write_zeroes": true, 00:14:11.613 "zcopy": false, 00:14:11.613 "get_zone_info": false, 00:14:11.613 "zone_management": false, 00:14:11.613 "zone_append": false, 
00:14:11.613 "compare": false, 00:14:11.613 "compare_and_write": false, 00:14:11.613 "abort": false, 00:14:11.613 "seek_hole": false, 00:14:11.613 "seek_data": false, 00:14:11.613 "copy": false, 00:14:11.613 "nvme_iov_md": false 00:14:11.613 }, 00:14:11.613 "memory_domains": [ 00:14:11.613 { 00:14:11.613 "dma_device_id": "system", 00:14:11.613 "dma_device_type": 1 00:14:11.613 }, 00:14:11.613 { 00:14:11.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.613 "dma_device_type": 2 00:14:11.613 }, 00:14:11.613 { 00:14:11.613 "dma_device_id": "system", 00:14:11.613 "dma_device_type": 1 00:14:11.613 }, 00:14:11.613 { 00:14:11.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.613 "dma_device_type": 2 00:14:11.613 }, 00:14:11.613 { 00:14:11.613 "dma_device_id": "system", 00:14:11.613 "dma_device_type": 1 00:14:11.613 }, 00:14:11.613 { 00:14:11.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.613 "dma_device_type": 2 00:14:11.613 } 00:14:11.613 ], 00:14:11.613 "driver_specific": { 00:14:11.613 "raid": { 00:14:11.613 "uuid": "57ebb6ad-1792-4e78-80d7-7ada2f159f28", 00:14:11.614 "strip_size_kb": 0, 00:14:11.614 "state": "online", 00:14:11.614 "raid_level": "raid1", 00:14:11.614 "superblock": true, 00:14:11.614 "num_base_bdevs": 3, 00:14:11.614 "num_base_bdevs_discovered": 3, 00:14:11.614 "num_base_bdevs_operational": 3, 00:14:11.614 "base_bdevs_list": [ 00:14:11.614 { 00:14:11.614 "name": "NewBaseBdev", 00:14:11.614 "uuid": "c8172a35-9670-4f9f-a1aa-8018d613dd7a", 00:14:11.614 "is_configured": true, 00:14:11.614 "data_offset": 2048, 00:14:11.614 "data_size": 63488 00:14:11.614 }, 00:14:11.614 { 00:14:11.614 "name": "BaseBdev2", 00:14:11.614 "uuid": "f5c6b670-cf54-432e-8785-9852a28550d8", 00:14:11.614 "is_configured": true, 00:14:11.614 "data_offset": 2048, 00:14:11.614 "data_size": 63488 00:14:11.614 }, 00:14:11.614 { 00:14:11.614 "name": "BaseBdev3", 00:14:11.614 "uuid": "401539c1-adf6-45c3-8311-46246908cdfc", 00:14:11.614 "is_configured": true, 00:14:11.614 
"data_offset": 2048, 00:14:11.614 "data_size": 63488 00:14:11.614 } 00:14:11.614 ] 00:14:11.614 } 00:14:11.614 } 00:14:11.614 }' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:11.614 BaseBdev2 00:14:11.614 BaseBdev3' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.614 14:13:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.614 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:11.614 [2024-11-27 14:13:42.562265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.614 [2024-11-27 14:13:42.562302] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.614 [2024-11-27 14:13:42.562384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.614 [2024-11-27 14:13:42.562707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.614 [2024-11-27 14:13:42.562722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68237 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68237 ']' 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68237 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.872 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68237 00:14:11.873 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:11.873 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:11.873 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68237' 00:14:11.873 killing process with pid 68237 00:14:11.873 14:13:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@973 -- # kill 68237 00:14:11.873 [2024-11-27 14:13:42.612920] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.873 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68237 00:14:12.132 [2024-11-27 14:13:42.950314] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:13.512 14:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:13.512 00:14:13.512 real 0m11.117s 00:14:13.512 user 0m17.631s 00:14:13.512 sys 0m1.956s 00:14:13.512 14:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:13.512 14:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.512 ************************************ 00:14:13.512 END TEST raid_state_function_test_sb 00:14:13.512 ************************************ 00:14:13.512 14:13:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:13.512 14:13:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:13.512 14:13:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:13.512 14:13:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:13.512 ************************************ 00:14:13.512 START TEST raid_superblock_test 00:14:13.512 ************************************ 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:13.512 
14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:13.512 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68863 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68863 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68863 ']' 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:13.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.513 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.513 [2024-11-27 14:13:44.311422] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:13.513 [2024-11-27 14:13:44.311622] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68863 ] 00:14:13.772 [2024-11-27 14:13:44.488219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.772 [2024-11-27 14:13:44.614443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.032 [2024-11-27 14:13:44.821291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.032 [2024-11-27 14:13:44.821446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:14.291 14:13:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.291 malloc1 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.291 [2024-11-27 14:13:45.216610] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.291 [2024-11-27 14:13:45.216742] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.291 [2024-11-27 14:13:45.216803] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:14.291 [2024-11-27 14:13:45.216842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.291 [2024-11-27 14:13:45.219066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.291 [2024-11-27 14:13:45.219152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.291 pt1 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.291 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.551 malloc2 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.552 [2024-11-27 14:13:45.275694] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.552 [2024-11-27 14:13:45.275864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.552 [2024-11-27 14:13:45.275902] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:14.552 [2024-11-27 14:13:45.275913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.552 [2024-11-27 14:13:45.278459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.552 [2024-11-27 14:13:45.278510] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.552 pt2 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.552 malloc3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.552 [2024-11-27 14:13:45.342754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.552 [2024-11-27 14:13:45.342851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.552 [2024-11-27 14:13:45.342890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:14.552 [2024-11-27 14:13:45.342917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.552 [2024-11-27 14:13:45.345049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.552 [2024-11-27 14:13:45.345141] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.552 pt3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.552 [2024-11-27 14:13:45.354767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.552 [2024-11-27 14:13:45.356606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.552 [2024-11-27 
14:13:45.356715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.552 [2024-11-27 14:13:45.356919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:14.552 [2024-11-27 14:13:45.356975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:14.552 [2024-11-27 14:13:45.357251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:14.552 [2024-11-27 14:13:45.357478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:14.552 [2024-11-27 14:13:45.357526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:14.552 [2024-11-27 14:13:45.357709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.552 
14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.552 "name": "raid_bdev1", 00:14:14.552 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:14.552 "strip_size_kb": 0, 00:14:14.552 "state": "online", 00:14:14.552 "raid_level": "raid1", 00:14:14.552 "superblock": true, 00:14:14.552 "num_base_bdevs": 3, 00:14:14.552 "num_base_bdevs_discovered": 3, 00:14:14.552 "num_base_bdevs_operational": 3, 00:14:14.552 "base_bdevs_list": [ 00:14:14.552 { 00:14:14.552 "name": "pt1", 00:14:14.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:14.552 "is_configured": true, 00:14:14.552 "data_offset": 2048, 00:14:14.552 "data_size": 63488 00:14:14.552 }, 00:14:14.552 { 00:14:14.552 "name": "pt2", 00:14:14.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.552 "is_configured": true, 00:14:14.552 "data_offset": 2048, 00:14:14.552 "data_size": 63488 00:14:14.552 }, 00:14:14.552 { 00:14:14.552 "name": "pt3", 00:14:14.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.552 "is_configured": true, 00:14:14.552 "data_offset": 2048, 00:14:14.552 "data_size": 63488 00:14:14.552 } 00:14:14.552 ] 00:14:14.552 }' 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.552 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:15.123 [2024-11-27 14:13:45.818374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.123 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:15.123 "name": "raid_bdev1", 00:14:15.123 "aliases": [ 00:14:15.123 "897f2404-ae78-448c-86ef-db36a764034a" 00:14:15.123 ], 00:14:15.123 "product_name": "Raid Volume", 00:14:15.123 "block_size": 512, 00:14:15.123 "num_blocks": 63488, 00:14:15.123 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:15.123 "assigned_rate_limits": { 00:14:15.123 "rw_ios_per_sec": 0, 00:14:15.123 "rw_mbytes_per_sec": 0, 00:14:15.123 "r_mbytes_per_sec": 0, 00:14:15.123 "w_mbytes_per_sec": 0 00:14:15.123 }, 00:14:15.123 "claimed": false, 00:14:15.123 "zoned": false, 00:14:15.123 
"supported_io_types": { 00:14:15.123 "read": true, 00:14:15.123 "write": true, 00:14:15.123 "unmap": false, 00:14:15.123 "flush": false, 00:14:15.123 "reset": true, 00:14:15.123 "nvme_admin": false, 00:14:15.123 "nvme_io": false, 00:14:15.123 "nvme_io_md": false, 00:14:15.123 "write_zeroes": true, 00:14:15.123 "zcopy": false, 00:14:15.123 "get_zone_info": false, 00:14:15.123 "zone_management": false, 00:14:15.123 "zone_append": false, 00:14:15.123 "compare": false, 00:14:15.123 "compare_and_write": false, 00:14:15.123 "abort": false, 00:14:15.123 "seek_hole": false, 00:14:15.123 "seek_data": false, 00:14:15.123 "copy": false, 00:14:15.123 "nvme_iov_md": false 00:14:15.123 }, 00:14:15.123 "memory_domains": [ 00:14:15.123 { 00:14:15.123 "dma_device_id": "system", 00:14:15.123 "dma_device_type": 1 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.123 "dma_device_type": 2 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "dma_device_id": "system", 00:14:15.123 "dma_device_type": 1 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.123 "dma_device_type": 2 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "dma_device_id": "system", 00:14:15.123 "dma_device_type": 1 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.123 "dma_device_type": 2 00:14:15.123 } 00:14:15.123 ], 00:14:15.123 "driver_specific": { 00:14:15.123 "raid": { 00:14:15.123 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:15.123 "strip_size_kb": 0, 00:14:15.123 "state": "online", 00:14:15.123 "raid_level": "raid1", 00:14:15.123 "superblock": true, 00:14:15.123 "num_base_bdevs": 3, 00:14:15.123 "num_base_bdevs_discovered": 3, 00:14:15.123 "num_base_bdevs_operational": 3, 00:14:15.123 "base_bdevs_list": [ 00:14:15.123 { 00:14:15.123 "name": "pt1", 00:14:15.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.123 "is_configured": true, 00:14:15.123 "data_offset": 2048, 
00:14:15.123 "data_size": 63488 00:14:15.123 }, 00:14:15.123 { 00:14:15.123 "name": "pt2", 00:14:15.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.123 "is_configured": true, 00:14:15.123 "data_offset": 2048, 00:14:15.124 "data_size": 63488 00:14:15.124 }, 00:14:15.124 { 00:14:15.124 "name": "pt3", 00:14:15.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.124 "is_configured": true, 00:14:15.124 "data_offset": 2048, 00:14:15.124 "data_size": 63488 00:14:15.124 } 00:14:15.124 ] 00:14:15.124 } 00:14:15.124 } 00:14:15.124 }' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:15.124 pt2 00:14:15.124 pt3' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.124 14:13:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.124 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:15.383 14:13:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:15.383 [2024-11-27 14:13:46.109796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=897f2404-ae78-448c-86ef-db36a764034a 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 897f2404-ae78-448c-86ef-db36a764034a ']' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 [2024-11-27 14:13:46.161394] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.383 [2024-11-27 14:13:46.161426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.383 [2024-11-27 14:13:46.161531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.383 [2024-11-27 14:13:46.161607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.383 [2024-11-27 14:13:46.161617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:15.383 14:13:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 [2024-11-27 14:13:46.309276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:15.383 [2024-11-27 14:13:46.311238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:15.383 [2024-11-27 14:13:46.311306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:15.383 [2024-11-27 14:13:46.311361] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:15.383 [2024-11-27 14:13:46.311418] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:15.383 [2024-11-27 14:13:46.311437] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:15.383 [2024-11-27 14:13:46.311454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.383 [2024-11-27 14:13:46.311463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:15.383 request: 00:14:15.383 { 00:14:15.383 "name": "raid_bdev1", 00:14:15.383 "raid_level": "raid1", 00:14:15.383 "base_bdevs": [ 00:14:15.383 "malloc1", 00:14:15.383 "malloc2", 00:14:15.383 "malloc3" 00:14:15.383 ], 00:14:15.383 "superblock": false, 00:14:15.383 "method": "bdev_raid_create", 00:14:15.383 "req_id": 1 00:14:15.383 } 00:14:15.383 Got JSON-RPC error response 00:14:15.383 response: 00:14:15.383 { 00:14:15.383 "code": -17, 00:14:15.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:15.383 } 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:15.383 14:13:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.383 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.641 [2024-11-27 14:13:46.365098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:15.641 [2024-11-27 14:13:46.365177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.641 [2024-11-27 14:13:46.365203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:15.641 [2024-11-27 14:13:46.365213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.641 [2024-11-27 14:13:46.367476] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.641 [2024-11-27 14:13:46.367514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:15.641 [2024-11-27 14:13:46.367607] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:15.641 [2024-11-27 14:13:46.367673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.641 pt1 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.641 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.641 "name": "raid_bdev1", 00:14:15.641 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:15.641 "strip_size_kb": 0, 00:14:15.641 "state": "configuring", 00:14:15.641 "raid_level": "raid1", 00:14:15.641 "superblock": true, 00:14:15.641 "num_base_bdevs": 3, 00:14:15.641 "num_base_bdevs_discovered": 1, 00:14:15.641 "num_base_bdevs_operational": 3, 00:14:15.641 "base_bdevs_list": [ 00:14:15.641 { 00:14:15.641 "name": "pt1", 00:14:15.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.642 "is_configured": true, 00:14:15.642 "data_offset": 2048, 00:14:15.642 "data_size": 63488 00:14:15.642 }, 00:14:15.642 { 00:14:15.642 "name": null, 00:14:15.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.642 "is_configured": false, 00:14:15.642 "data_offset": 2048, 00:14:15.642 "data_size": 63488 00:14:15.642 }, 00:14:15.642 { 00:14:15.642 "name": null, 00:14:15.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.642 "is_configured": false, 00:14:15.642 "data_offset": 2048, 00:14:15.642 "data_size": 63488 00:14:15.642 } 00:14:15.642 ] 00:14:15.642 }' 00:14:15.642 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.642 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.918 [2024-11-27 14:13:46.840289] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.918 [2024-11-27 14:13:46.840357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.918 [2024-11-27 14:13:46.840379] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:15.918 [2024-11-27 14:13:46.840388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.918 [2024-11-27 14:13:46.840841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.918 [2024-11-27 14:13:46.840869] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.918 [2024-11-27 14:13:46.840956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:15.918 [2024-11-27 14:13:46.840978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.918 pt2 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.918 [2024-11-27 14:13:46.852303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.918 14:13:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.918 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.178 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.178 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.178 "name": "raid_bdev1", 00:14:16.178 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:16.178 "strip_size_kb": 0, 00:14:16.178 "state": "configuring", 00:14:16.178 "raid_level": "raid1", 00:14:16.178 "superblock": true, 00:14:16.178 "num_base_bdevs": 3, 00:14:16.178 "num_base_bdevs_discovered": 1, 00:14:16.178 "num_base_bdevs_operational": 3, 00:14:16.178 "base_bdevs_list": [ 00:14:16.178 { 00:14:16.178 "name": "pt1", 00:14:16.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.178 
"is_configured": true, 00:14:16.178 "data_offset": 2048, 00:14:16.178 "data_size": 63488 00:14:16.178 }, 00:14:16.178 { 00:14:16.178 "name": null, 00:14:16.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.178 "is_configured": false, 00:14:16.178 "data_offset": 0, 00:14:16.178 "data_size": 63488 00:14:16.178 }, 00:14:16.178 { 00:14:16.178 "name": null, 00:14:16.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.178 "is_configured": false, 00:14:16.178 "data_offset": 2048, 00:14:16.178 "data_size": 63488 00:14:16.178 } 00:14:16.178 ] 00:14:16.178 }' 00:14:16.178 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.178 14:13:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.437 [2024-11-27 14:13:47.255669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:16.437 [2024-11-27 14:13:47.255752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.437 [2024-11-27 14:13:47.255772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:16.437 [2024-11-27 14:13:47.255783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.437 [2024-11-27 14:13:47.256344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.437 [2024-11-27 14:13:47.256379] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:16.437 [2024-11-27 14:13:47.256471] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:16.437 [2024-11-27 14:13:47.256516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:16.437 pt2 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.437 [2024-11-27 14:13:47.267623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:16.437 [2024-11-27 14:13:47.267675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.437 [2024-11-27 14:13:47.267708] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:16.437 [2024-11-27 14:13:47.267718] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.437 [2024-11-27 14:13:47.268130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.437 [2024-11-27 14:13:47.268171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:16.437 [2024-11-27 14:13:47.268243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:16.437 [2024-11-27 14:13:47.268271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 
00:14:16.437 [2024-11-27 14:13:47.268427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:16.437 [2024-11-27 14:13:47.268450] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:16.437 [2024-11-27 14:13:47.268712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:16.437 [2024-11-27 14:13:47.268894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:16.437 [2024-11-27 14:13:47.268908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:16.437 [2024-11-27 14:13:47.269077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.437 pt3 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.437 "name": "raid_bdev1", 00:14:16.437 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:16.437 "strip_size_kb": 0, 00:14:16.437 "state": "online", 00:14:16.437 "raid_level": "raid1", 00:14:16.437 "superblock": true, 00:14:16.437 "num_base_bdevs": 3, 00:14:16.437 "num_base_bdevs_discovered": 3, 00:14:16.437 "num_base_bdevs_operational": 3, 00:14:16.437 "base_bdevs_list": [ 00:14:16.437 { 00:14:16.437 "name": "pt1", 00:14:16.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.437 "is_configured": true, 00:14:16.437 "data_offset": 2048, 00:14:16.437 "data_size": 63488 00:14:16.437 }, 00:14:16.437 { 00:14:16.437 "name": "pt2", 00:14:16.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.437 "is_configured": true, 00:14:16.437 "data_offset": 2048, 00:14:16.437 "data_size": 63488 00:14:16.437 }, 00:14:16.437 { 00:14:16.437 "name": "pt3", 00:14:16.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.437 "is_configured": true, 00:14:16.437 "data_offset": 2048, 00:14:16.437 "data_size": 63488 00:14:16.437 } 00:14:16.437 ] 00:14:16.437 }' 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.437 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.696 [2024-11-27 14:13:47.627374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.696 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.955 "name": "raid_bdev1", 00:14:16.955 "aliases": [ 00:14:16.955 "897f2404-ae78-448c-86ef-db36a764034a" 00:14:16.955 ], 00:14:16.955 "product_name": "Raid Volume", 00:14:16.955 "block_size": 512, 00:14:16.955 "num_blocks": 63488, 00:14:16.955 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:16.955 "assigned_rate_limits": { 00:14:16.955 "rw_ios_per_sec": 0, 00:14:16.955 "rw_mbytes_per_sec": 0, 00:14:16.955 "r_mbytes_per_sec": 0, 00:14:16.955 
"w_mbytes_per_sec": 0 00:14:16.955 }, 00:14:16.955 "claimed": false, 00:14:16.955 "zoned": false, 00:14:16.955 "supported_io_types": { 00:14:16.955 "read": true, 00:14:16.955 "write": true, 00:14:16.955 "unmap": false, 00:14:16.955 "flush": false, 00:14:16.955 "reset": true, 00:14:16.955 "nvme_admin": false, 00:14:16.955 "nvme_io": false, 00:14:16.955 "nvme_io_md": false, 00:14:16.955 "write_zeroes": true, 00:14:16.955 "zcopy": false, 00:14:16.955 "get_zone_info": false, 00:14:16.955 "zone_management": false, 00:14:16.955 "zone_append": false, 00:14:16.955 "compare": false, 00:14:16.955 "compare_and_write": false, 00:14:16.955 "abort": false, 00:14:16.955 "seek_hole": false, 00:14:16.955 "seek_data": false, 00:14:16.955 "copy": false, 00:14:16.955 "nvme_iov_md": false 00:14:16.955 }, 00:14:16.955 "memory_domains": [ 00:14:16.955 { 00:14:16.955 "dma_device_id": "system", 00:14:16.955 "dma_device_type": 1 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.955 "dma_device_type": 2 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "dma_device_id": "system", 00:14:16.955 "dma_device_type": 1 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.955 "dma_device_type": 2 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "dma_device_id": "system", 00:14:16.955 "dma_device_type": 1 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.955 "dma_device_type": 2 00:14:16.955 } 00:14:16.955 ], 00:14:16.955 "driver_specific": { 00:14:16.955 "raid": { 00:14:16.955 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:16.955 "strip_size_kb": 0, 00:14:16.955 "state": "online", 00:14:16.955 "raid_level": "raid1", 00:14:16.955 "superblock": true, 00:14:16.955 "num_base_bdevs": 3, 00:14:16.955 "num_base_bdevs_discovered": 3, 00:14:16.955 "num_base_bdevs_operational": 3, 00:14:16.955 "base_bdevs_list": [ 00:14:16.955 { 00:14:16.955 "name": "pt1", 00:14:16.955 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:16.955 "is_configured": true, 00:14:16.955 "data_offset": 2048, 00:14:16.955 "data_size": 63488 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "name": "pt2", 00:14:16.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:16.955 "is_configured": true, 00:14:16.955 "data_offset": 2048, 00:14:16.955 "data_size": 63488 00:14:16.955 }, 00:14:16.955 { 00:14:16.955 "name": "pt3", 00:14:16.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:16.955 "is_configured": true, 00:14:16.955 "data_offset": 2048, 00:14:16.955 "data_size": 63488 00:14:16.955 } 00:14:16.955 ] 00:14:16.955 } 00:14:16.955 } 00:14:16.955 }' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:16.955 pt2 00:14:16.955 pt3' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.955 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.955 [2024-11-27 14:13:47.898948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 897f2404-ae78-448c-86ef-db36a764034a '!=' 897f2404-ae78-448c-86ef-db36a764034a ']' 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.214 [2024-11-27 14:13:47.946604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.214 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.214 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.214 "name": "raid_bdev1", 00:14:17.214 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:17.214 "strip_size_kb": 0, 00:14:17.214 "state": "online", 00:14:17.214 "raid_level": "raid1", 00:14:17.214 "superblock": true, 00:14:17.214 "num_base_bdevs": 3, 00:14:17.214 "num_base_bdevs_discovered": 2, 00:14:17.214 "num_base_bdevs_operational": 2, 00:14:17.214 "base_bdevs_list": [ 00:14:17.214 { 00:14:17.214 "name": null, 00:14:17.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.214 "is_configured": false, 00:14:17.214 "data_offset": 0, 00:14:17.214 "data_size": 63488 00:14:17.214 }, 00:14:17.214 { 00:14:17.214 "name": "pt2", 00:14:17.214 
"uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.214 "is_configured": true, 00:14:17.214 "data_offset": 2048, 00:14:17.214 "data_size": 63488 00:14:17.214 }, 00:14:17.214 { 00:14:17.214 "name": "pt3", 00:14:17.214 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.214 "is_configured": true, 00:14:17.214 "data_offset": 2048, 00:14:17.214 "data_size": 63488 00:14:17.214 } 00:14:17.214 ] 00:14:17.214 }' 00:14:17.214 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.214 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.472 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.472 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.472 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 [2024-11-27 14:13:48.429663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.730 [2024-11-27 14:13:48.429701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.730 [2024-11-27 14:13:48.429800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.730 [2024-11-27 14:13:48.429866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.730 [2024-11-27 14:13:48.429881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:17.730 14:13:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 [2024-11-27 14:13:48.517459] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.730 [2024-11-27 14:13:48.517516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.730 [2024-11-27 14:13:48.517549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:17.730 [2024-11-27 14:13:48.517560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.730 [2024-11-27 14:13:48.519789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.730 [2024-11-27 14:13:48.519832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.730 [2024-11-27 14:13:48.519907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:17.730 [2024-11-27 14:13:48.519958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:17.730 pt2 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.730 "name": "raid_bdev1", 00:14:17.730 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:17.730 "strip_size_kb": 0, 00:14:17.730 "state": "configuring", 00:14:17.730 "raid_level": "raid1", 00:14:17.730 "superblock": true, 00:14:17.730 "num_base_bdevs": 3, 00:14:17.730 "num_base_bdevs_discovered": 1, 00:14:17.730 "num_base_bdevs_operational": 2, 00:14:17.730 "base_bdevs_list": [ 00:14:17.730 { 00:14:17.730 "name": null, 00:14:17.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.730 "is_configured": false, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 }, 00:14:17.730 { 00:14:17.730 "name": "pt2", 
00:14:17.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.730 "is_configured": true, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 }, 00:14:17.730 { 00:14:17.730 "name": null, 00:14:17.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:17.730 "is_configured": false, 00:14:17.730 "data_offset": 2048, 00:14:17.730 "data_size": 63488 00:14:17.730 } 00:14:17.730 ] 00:14:17.730 }' 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.730 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.988 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.988 [2024-11-27 14:13:48.940806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:17.988 [2024-11-27 14:13:48.940883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.988 [2024-11-27 14:13:48.940905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:17.988 [2024-11-27 14:13:48.940917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.989 [2024-11-27 14:13:48.941426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.989 [2024-11-27 14:13:48.941465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:14:17.989 [2024-11-27 14:13:48.941583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:17.989 [2024-11-27 14:13:48.941615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:17.989 [2024-11-27 14:13:48.941738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:17.989 [2024-11-27 14:13:48.941758] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:17.989 [2024-11-27 14:13:48.942038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:17.989 [2024-11-27 14:13:48.942250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:17.989 [2024-11-27 14:13:48.942271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:17.989 [2024-11-27 14:13:48.942424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.248 pt3 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.248 14:13:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.248 "name": "raid_bdev1", 00:14:18.248 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:18.248 "strip_size_kb": 0, 00:14:18.248 "state": "online", 00:14:18.248 "raid_level": "raid1", 00:14:18.248 "superblock": true, 00:14:18.248 "num_base_bdevs": 3, 00:14:18.248 "num_base_bdevs_discovered": 2, 00:14:18.248 "num_base_bdevs_operational": 2, 00:14:18.248 "base_bdevs_list": [ 00:14:18.248 { 00:14:18.248 "name": null, 00:14:18.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.248 "is_configured": false, 00:14:18.248 "data_offset": 2048, 00:14:18.248 "data_size": 63488 00:14:18.248 }, 00:14:18.248 { 00:14:18.248 "name": "pt2", 00:14:18.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.248 "is_configured": true, 00:14:18.248 "data_offset": 2048, 00:14:18.248 "data_size": 63488 00:14:18.248 }, 00:14:18.248 { 00:14:18.248 "name": "pt3", 00:14:18.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.248 "is_configured": true, 00:14:18.248 "data_offset": 2048, 00:14:18.248 "data_size": 63488 00:14:18.248 } 
00:14:18.248 ] 00:14:18.248 }' 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.248 14:13:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.506 [2024-11-27 14:13:49.404037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.506 [2024-11-27 14:13:49.404141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.506 [2024-11-27 14:13:49.404233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.506 [2024-11-27 14:13:49.404305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.506 [2024-11-27 14:13:49.404323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.506 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.764 [2024-11-27 14:13:49.463948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:18.764 [2024-11-27 14:13:49.464037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.764 [2024-11-27 14:13:49.464059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:18.764 [2024-11-27 14:13:49.464078] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.764 [2024-11-27 14:13:49.466467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.764 [2024-11-27 14:13:49.466504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:18.764 [2024-11-27 14:13:49.466594] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:18.764 [2024-11-27 14:13:49.466641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:14:18.764 [2024-11-27 14:13:49.466785] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:18.764 [2024-11-27 14:13:49.466814] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.764 [2024-11-27 14:13:49.466831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:18.764 [2024-11-27 14:13:49.466895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:18.764 pt1 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.764 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.765 "name": "raid_bdev1", 00:14:18.765 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:18.765 "strip_size_kb": 0, 00:14:18.765 "state": "configuring", 00:14:18.765 "raid_level": "raid1", 00:14:18.765 "superblock": true, 00:14:18.765 "num_base_bdevs": 3, 00:14:18.765 "num_base_bdevs_discovered": 1, 00:14:18.765 "num_base_bdevs_operational": 2, 00:14:18.765 "base_bdevs_list": [ 00:14:18.765 { 00:14:18.765 "name": null, 00:14:18.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.765 "is_configured": false, 00:14:18.765 "data_offset": 2048, 00:14:18.765 "data_size": 63488 00:14:18.765 }, 00:14:18.765 { 00:14:18.765 "name": "pt2", 00:14:18.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:18.765 "is_configured": true, 00:14:18.765 "data_offset": 2048, 00:14:18.765 "data_size": 63488 00:14:18.765 }, 00:14:18.765 { 00:14:18.765 "name": null, 00:14:18.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:18.765 "is_configured": false, 00:14:18.765 "data_offset": 2048, 00:14:18.765 "data_size": 63488 00:14:18.765 } 00:14:18.765 ] 00:14:18.765 }' 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.765 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:19.024 14:13:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 [2024-11-27 14:13:49.943161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:19.024 [2024-11-27 14:13:49.943233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.024 [2024-11-27 14:13:49.943257] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:19.024 [2024-11-27 14:13:49.943266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.024 [2024-11-27 14:13:49.943750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.024 [2024-11-27 14:13:49.943776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:19.024 [2024-11-27 14:13:49.943863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:19.024 [2024-11-27 14:13:49.943900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:19.024 [2024-11-27 14:13:49.944024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:19.024 
[2024-11-27 14:13:49.944040] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:19.024 [2024-11-27 14:13:49.944340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:19.024 [2024-11-27 14:13:49.944521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:19.024 [2024-11-27 14:13:49.944546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:19.024 [2024-11-27 14:13:49.944706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.024 pt3 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.024 
14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.024 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.367 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.368 "name": "raid_bdev1", 00:14:19.368 "uuid": "897f2404-ae78-448c-86ef-db36a764034a", 00:14:19.368 "strip_size_kb": 0, 00:14:19.368 "state": "online", 00:14:19.368 "raid_level": "raid1", 00:14:19.368 "superblock": true, 00:14:19.368 "num_base_bdevs": 3, 00:14:19.368 "num_base_bdevs_discovered": 2, 00:14:19.368 "num_base_bdevs_operational": 2, 00:14:19.368 "base_bdevs_list": [ 00:14:19.368 { 00:14:19.368 "name": null, 00:14:19.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.368 "is_configured": false, 00:14:19.368 "data_offset": 2048, 00:14:19.368 "data_size": 63488 00:14:19.368 }, 00:14:19.368 { 00:14:19.368 "name": "pt2", 00:14:19.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:19.368 "is_configured": true, 00:14:19.368 "data_offset": 2048, 00:14:19.368 "data_size": 63488 00:14:19.368 }, 00:14:19.368 { 00:14:19.368 "name": "pt3", 00:14:19.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:19.368 "is_configured": true, 00:14:19.368 "data_offset": 2048, 00:14:19.368 "data_size": 63488 00:14:19.368 } 00:14:19.368 ] 00:14:19.368 }' 00:14:19.368 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.368 14:13:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.628 [2024-11-27 14:13:50.418666] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 897f2404-ae78-448c-86ef-db36a764034a '!=' 897f2404-ae78-448c-86ef-db36a764034a ']' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68863 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68863 ']' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68863 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68863 00:14:19.628 killing process with pid 68863 00:14:19.628 14:13:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68863' 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68863 00:14:19.628 [2024-11-27 14:13:50.482664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.628 [2024-11-27 14:13:50.482767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.628 14:13:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68863 00:14:19.628 [2024-11-27 14:13:50.482835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.628 [2024-11-27 14:13:50.482851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:19.887 [2024-11-27 14:13:50.801087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.264 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:21.264 00:14:21.264 real 0m7.748s 00:14:21.264 user 0m12.095s 00:14:21.264 sys 0m1.365s 00:14:21.264 14:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.264 14:13:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.264 ************************************ 00:14:21.264 END TEST raid_superblock_test 00:14:21.264 ************************************ 00:14:21.264 14:13:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:14:21.264 14:13:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:21.264 14:13:52 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:14:21.264 14:13:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.264 ************************************ 00:14:21.264 START TEST raid_read_error_test 00:14:21.264 ************************************ 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:21.264 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
local base_bdevs 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1rbcdpE5rf 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69303 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69303 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69303 ']' 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.265 14:13:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.265 [2024-11-27 14:13:52.138705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:21.265 [2024-11-27 14:13:52.138836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:14:21.524 [2024-11-27 14:13:52.315919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.524 [2024-11-27 14:13:52.441014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.785 [2024-11-27 14:13:52.657647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.785 [2024-11-27 14:13:52.657688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.354 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 BaseBdev1_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 true 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 [2024-11-27 14:13:53.096135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:22.355 [2024-11-27 14:13:53.096189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.355 [2024-11-27 14:13:53.096210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:22.355 [2024-11-27 14:13:53.096221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.355 [2024-11-27 14:13:53.098393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.355 [2024-11-27 14:13:53.098433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.355 BaseBdev1 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:22.355 BaseBdev2_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 true 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 [2024-11-27 14:13:53.164943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:22.355 [2024-11-27 14:13:53.165079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.355 [2024-11-27 14:13:53.165167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:22.355 [2024-11-27 14:13:53.165224] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.355 [2024-11-27 14:13:53.167593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.355 [2024-11-27 14:13:53.167671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.355 BaseBdev2 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 BaseBdev3_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 true 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 [2024-11-27 14:13:53.247449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:22.355 [2024-11-27 14:13:53.247506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.355 [2024-11-27 14:13:53.247527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:22.355 [2024-11-27 14:13:53.247537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.355 [2024-11-27 14:13:53.249813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.355 [2024-11-27 14:13:53.249899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.355 BaseBdev3 00:14:22.355 14:13:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 [2024-11-27 14:13:53.259503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.355 [2024-11-27 14:13:53.261537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.355 [2024-11-27 14:13:53.261616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.355 [2024-11-27 14:13:53.261842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:22.355 [2024-11-27 14:13:53.261855] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.355 [2024-11-27 14:13:53.262138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:22.355 [2024-11-27 14:13:53.262340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:22.355 [2024-11-27 14:13:53.262353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:22.355 [2024-11-27 14:13:53.262529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.355 14:13:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.355 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.615 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.615 "name": "raid_bdev1", 00:14:22.615 "uuid": "a4d27383-0546-41f9-89e8-caedab9d8618", 00:14:22.615 "strip_size_kb": 0, 00:14:22.615 "state": "online", 00:14:22.615 "raid_level": "raid1", 00:14:22.615 "superblock": true, 00:14:22.615 "num_base_bdevs": 3, 00:14:22.615 "num_base_bdevs_discovered": 3, 00:14:22.615 "num_base_bdevs_operational": 3, 00:14:22.615 "base_bdevs_list": [ 00:14:22.615 { 00:14:22.615 "name": "BaseBdev1", 00:14:22.615 "uuid": "914d2c78-343f-5a9d-ac23-55d9173b41ce", 00:14:22.615 
"is_configured": true, 00:14:22.615 "data_offset": 2048, 00:14:22.615 "data_size": 63488 00:14:22.615 }, 00:14:22.615 { 00:14:22.615 "name": "BaseBdev2", 00:14:22.615 "uuid": "a2959120-2c2f-5858-a59f-f738ed9854b3", 00:14:22.615 "is_configured": true, 00:14:22.615 "data_offset": 2048, 00:14:22.615 "data_size": 63488 00:14:22.615 }, 00:14:22.615 { 00:14:22.615 "name": "BaseBdev3", 00:14:22.615 "uuid": "a05f78d2-5988-5195-8779-1d9a9fb77ffc", 00:14:22.615 "is_configured": true, 00:14:22.615 "data_offset": 2048, 00:14:22.615 "data_size": 63488 00:14:22.615 } 00:14:22.615 ] 00:14:22.615 }' 00:14:22.615 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.615 14:13:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.876 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:22.876 14:13:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:22.876 [2024-11-27 14:13:53.815896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.815 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.075 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.075 "name": "raid_bdev1", 00:14:24.075 "uuid": "a4d27383-0546-41f9-89e8-caedab9d8618", 00:14:24.075 "strip_size_kb": 0, 00:14:24.075 "state": "online", 00:14:24.075 "raid_level": 
"raid1", 00:14:24.075 "superblock": true, 00:14:24.075 "num_base_bdevs": 3, 00:14:24.075 "num_base_bdevs_discovered": 3, 00:14:24.075 "num_base_bdevs_operational": 3, 00:14:24.075 "base_bdevs_list": [ 00:14:24.075 { 00:14:24.075 "name": "BaseBdev1", 00:14:24.075 "uuid": "914d2c78-343f-5a9d-ac23-55d9173b41ce", 00:14:24.075 "is_configured": true, 00:14:24.075 "data_offset": 2048, 00:14:24.075 "data_size": 63488 00:14:24.075 }, 00:14:24.075 { 00:14:24.075 "name": "BaseBdev2", 00:14:24.075 "uuid": "a2959120-2c2f-5858-a59f-f738ed9854b3", 00:14:24.075 "is_configured": true, 00:14:24.075 "data_offset": 2048, 00:14:24.075 "data_size": 63488 00:14:24.075 }, 00:14:24.075 { 00:14:24.075 "name": "BaseBdev3", 00:14:24.075 "uuid": "a05f78d2-5988-5195-8779-1d9a9fb77ffc", 00:14:24.075 "is_configured": true, 00:14:24.075 "data_offset": 2048, 00:14:24.075 "data_size": 63488 00:14:24.075 } 00:14:24.075 ] 00:14:24.075 }' 00:14:24.075 14:13:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.075 14:13:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.336 [2024-11-27 14:13:55.165753] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.336 [2024-11-27 14:13:55.165874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.336 [2024-11-27 14:13:55.169236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.336 [2024-11-27 14:13:55.169346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.336 [2024-11-27 14:13:55.169528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.336 [2024-11-27 14:13:55.169582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:24.336 { 00:14:24.336 "results": [ 00:14:24.336 { 00:14:24.336 "job": "raid_bdev1", 00:14:24.336 "core_mask": "0x1", 00:14:24.336 "workload": "randrw", 00:14:24.336 "percentage": 50, 00:14:24.336 "status": "finished", 00:14:24.336 "queue_depth": 1, 00:14:24.336 "io_size": 131072, 00:14:24.336 "runtime": 1.35076, 00:14:24.336 "iops": 11867.39317125174, 00:14:24.336 "mibps": 1483.4241464064676, 00:14:24.336 "io_failed": 0, 00:14:24.336 "io_timeout": 0, 00:14:24.336 "avg_latency_us": 81.27260251656965, 00:14:24.336 "min_latency_us": 25.041048034934498, 00:14:24.336 "max_latency_us": 1659.8637554585152 00:14:24.336 } 00:14:24.336 ], 00:14:24.336 "core_count": 1 00:14:24.336 } 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69303 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69303 ']' 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69303 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69303 00:14:24.336 killing process with pid 69303 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 69303' 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69303 00:14:24.336 [2024-11-27 14:13:55.215929] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:24.336 14:13:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69303 00:14:24.595 [2024-11-27 14:13:55.453674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1rbcdpE5rf 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:25.971 00:14:25.971 real 0m4.651s 00:14:25.971 user 0m5.576s 00:14:25.971 sys 0m0.547s 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.971 14:13:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.971 ************************************ 00:14:25.971 END TEST raid_read_error_test 00:14:25.971 ************************************ 00:14:25.971 14:13:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:14:25.971 14:13:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:25.971 14:13:56 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.971 14:13:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.971 ************************************ 00:14:25.971 START TEST raid_write_error_test 00:14:25.971 ************************************ 00:14:25.971 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:25.972 14:13:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XVHxpohVeQ 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69449 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69449 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69449 ']' 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.972 14:13:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.972 [2024-11-27 14:13:56.860026] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:14:25.972 [2024-11-27 14:13:56.860272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69449 ] 00:14:26.231 [2024-11-27 14:13:57.036558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.231 [2024-11-27 14:13:57.158636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.489 [2024-11-27 14:13:57.366945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.489 [2024-11-27 14:13:57.367000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.057 BaseBdev1_malloc 00:14:27.057 14:13:57 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.057 true 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.057 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.057 [2024-11-27 14:13:57.780570] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:27.057 [2024-11-27 14:13:57.780675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.058 [2024-11-27 14:13:57.780702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:27.058 [2024-11-27 14:13:57.780713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.058 [2024-11-27 14:13:57.782990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.058 [2024-11-27 14:13:57.783031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:27.058 BaseBdev1 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 BaseBdev2_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 true 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 [2024-11-27 14:13:57.849920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:27.058 [2024-11-27 14:13:57.849994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.058 [2024-11-27 14:13:57.850013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:27.058 [2024-11-27 14:13:57.850025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.058 [2024-11-27 14:13:57.852378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.058 [2024-11-27 14:13:57.852422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:27.058 BaseBdev2 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 BaseBdev3_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 true 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 [2024-11-27 14:13:57.931075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:27.058 [2024-11-27 14:13:57.931157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.058 [2024-11-27 14:13:57.931177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:27.058 [2024-11-27 14:13:57.931187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.058 [2024-11-27 14:13:57.933358] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.058 [2024-11-27 14:13:57.933478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:27.058 BaseBdev3 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 [2024-11-27 14:13:57.943100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.058 [2024-11-27 14:13:57.944909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.058 [2024-11-27 14:13:57.945037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.058 [2024-11-27 14:13:57.945257] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:27.058 [2024-11-27 14:13:57.945271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:27.058 [2024-11-27 14:13:57.945512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:27.058 [2024-11-27 14:13:57.945681] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:27.058 [2024-11-27 14:13:57.945692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:27.058 [2024-11-27 14:13:57.945861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.058 "name": "raid_bdev1", 00:14:27.058 "uuid": "48fc4cb7-433a-4961-bdb8-0333b2d17c41", 00:14:27.058 "strip_size_kb": 0, 00:14:27.058 "state": "online", 00:14:27.058 "raid_level": "raid1", 00:14:27.058 "superblock": true, 00:14:27.058 
"num_base_bdevs": 3, 00:14:27.058 "num_base_bdevs_discovered": 3, 00:14:27.058 "num_base_bdevs_operational": 3, 00:14:27.058 "base_bdevs_list": [ 00:14:27.058 { 00:14:27.058 "name": "BaseBdev1", 00:14:27.058 "uuid": "59a5d4cb-676a-5a9b-960f-4da4f2011164", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": "BaseBdev2", 00:14:27.058 "uuid": "4bbfc765-1f7a-5fda-bfe6-987a68c3bc91", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": "BaseBdev3", 00:14:27.058 "uuid": "be08da36-db36-5865-ae4c-6515fafc7843", 00:14:27.058 "is_configured": true, 00:14:27.058 "data_offset": 2048, 00:14:27.058 "data_size": 63488 00:14:27.058 } 00:14:27.058 ] 00:14:27.058 }' 00:14:27.058 14:13:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.058 14:13:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.627 14:13:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:27.627 14:13:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:27.627 [2024-11-27 14:13:58.539546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 [2024-11-27 14:13:59.447880] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:28.566 [2024-11-27 14:13:59.448049] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.566 [2024-11-27 14:13:59.448410] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.566 14:13:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.566 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.566 "name": "raid_bdev1", 00:14:28.566 "uuid": "48fc4cb7-433a-4961-bdb8-0333b2d17c41", 00:14:28.566 "strip_size_kb": 0, 00:14:28.566 "state": "online", 00:14:28.566 "raid_level": "raid1", 00:14:28.566 "superblock": true, 00:14:28.566 "num_base_bdevs": 3, 00:14:28.566 "num_base_bdevs_discovered": 2, 00:14:28.566 "num_base_bdevs_operational": 2, 00:14:28.566 "base_bdevs_list": [ 00:14:28.566 { 00:14:28.566 "name": null, 00:14:28.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.566 "is_configured": false, 00:14:28.566 "data_offset": 0, 00:14:28.567 "data_size": 63488 00:14:28.567 }, 00:14:28.567 { 00:14:28.567 "name": "BaseBdev2", 00:14:28.567 "uuid": "4bbfc765-1f7a-5fda-bfe6-987a68c3bc91", 00:14:28.567 "is_configured": true, 00:14:28.567 "data_offset": 2048, 00:14:28.567 "data_size": 63488 00:14:28.567 }, 00:14:28.567 { 00:14:28.567 "name": "BaseBdev3", 00:14:28.567 "uuid": "be08da36-db36-5865-ae4c-6515fafc7843", 00:14:28.567 "is_configured": true, 00:14:28.567 "data_offset": 2048, 00:14:28.567 "data_size": 63488 00:14:28.567 } 00:14:28.567 ] 00:14:28.567 }' 00:14:28.567 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.567 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.136 [2024-11-27 14:13:59.926412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.136 [2024-11-27 14:13:59.926454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.136 [2024-11-27 14:13:59.929967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.136 [2024-11-27 14:13:59.930091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.136 [2024-11-27 14:13:59.930258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.136 [2024-11-27 14:13:59.930328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:29.136 { 00:14:29.136 "results": [ 00:14:29.136 { 00:14:29.136 "job": "raid_bdev1", 00:14:29.136 "core_mask": "0x1", 00:14:29.136 "workload": "randrw", 00:14:29.136 "percentage": 50, 00:14:29.136 "status": "finished", 00:14:29.136 "queue_depth": 1, 00:14:29.136 "io_size": 131072, 00:14:29.136 "runtime": 1.387589, 00:14:29.136 "iops": 13364.187810655749, 00:14:29.136 "mibps": 1670.5234763319686, 00:14:29.136 "io_failed": 0, 00:14:29.136 "io_timeout": 0, 00:14:29.136 "avg_latency_us": 71.78500966425656, 00:14:29.136 "min_latency_us": 25.041048034934498, 00:14:29.136 "max_latency_us": 1488.1537117903931 00:14:29.136 } 00:14:29.136 ], 00:14:29.136 "core_count": 1 00:14:29.136 } 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69449 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69449 ']' 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@958 -- # kill -0 69449 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69449 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69449' 00:14:29.136 killing process with pid 69449 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69449 00:14:29.136 [2024-11-27 14:13:59.976826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.136 14:13:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69449 00:14:29.396 [2024-11-27 14:14:00.235418] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.775 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:30.775 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XVHxpohVeQ 00:14:30.775 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:30.775 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:30.775 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:30.776 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:30.776 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:30.776 14:14:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 
-- # [[ 0.00 = \0\.\0\0 ]] 00:14:30.776 00:14:30.776 real 0m4.772s 00:14:30.776 user 0m5.721s 00:14:30.776 sys 0m0.568s 00:14:30.776 14:14:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.776 ************************************ 00:14:30.776 END TEST raid_write_error_test 00:14:30.776 14:14:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.776 ************************************ 00:14:30.776 14:14:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:14:30.776 14:14:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:30.776 14:14:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:30.776 14:14:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:30.776 14:14:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.776 14:14:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.776 ************************************ 00:14:30.776 START TEST raid_state_function_test 00:14:30.776 ************************************ 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.776 14:14:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69598 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69598' 00:14:30.776 Process raid pid: 69598 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69598 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69598 ']' 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.776 14:14:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.776 [2024-11-27 14:14:01.697710] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:30.776 [2024-11-27 14:14:01.697914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.036 [2024-11-27 14:14:01.853701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.036 [2024-11-27 14:14:01.980519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.295 [2024-11-27 14:14:02.196644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.295 [2024-11-27 14:14:02.196804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.863 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.863 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:31.863 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.864 [2024-11-27 14:14:02.570000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.864 [2024-11-27 14:14:02.570061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.864 [2024-11-27 14:14:02.570073] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.864 [2024-11-27 14:14:02.570082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.864 [2024-11-27 14:14:02.570089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:31.864 [2024-11-27 14:14:02.570098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.864 [2024-11-27 14:14:02.570104] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:31.864 [2024-11-27 14:14:02.570112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.864 "name": "Existed_Raid", 00:14:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.864 "strip_size_kb": 64, 00:14:31.864 "state": "configuring", 00:14:31.864 "raid_level": "raid0", 00:14:31.864 "superblock": false, 00:14:31.864 "num_base_bdevs": 4, 00:14:31.864 "num_base_bdevs_discovered": 0, 00:14:31.864 "num_base_bdevs_operational": 4, 00:14:31.864 "base_bdevs_list": [ 00:14:31.864 { 00:14:31.864 "name": "BaseBdev1", 00:14:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.864 "is_configured": false, 00:14:31.864 "data_offset": 0, 00:14:31.864 "data_size": 0 00:14:31.864 }, 00:14:31.864 { 00:14:31.864 "name": "BaseBdev2", 00:14:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.864 "is_configured": false, 00:14:31.864 "data_offset": 0, 00:14:31.864 "data_size": 0 00:14:31.864 }, 00:14:31.864 { 00:14:31.864 "name": "BaseBdev3", 00:14:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.864 "is_configured": false, 00:14:31.864 "data_offset": 0, 00:14:31.864 "data_size": 0 00:14:31.864 }, 00:14:31.864 { 00:14:31.864 "name": "BaseBdev4", 00:14:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.864 "is_configured": false, 00:14:31.864 "data_offset": 0, 00:14:31.864 "data_size": 0 00:14:31.864 } 00:14:31.864 ] 00:14:31.864 }' 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.864 14:14:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.123 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:32.123 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.123 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.123 [2024-11-27 14:14:03.029211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.123 [2024-11-27 14:14:03.029333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:32.123 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.124 [2024-11-27 14:14:03.041189] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.124 [2024-11-27 14:14:03.041236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.124 [2024-11-27 14:14:03.041247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.124 [2024-11-27 14:14:03.041257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.124 [2024-11-27 14:14:03.041265] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.124 [2024-11-27 14:14:03.041274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.124 [2024-11-27 14:14:03.041281] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:32.124 [2024-11-27 14:14:03.041291] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.124 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.393 [2024-11-27 14:14:03.093027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.393 BaseBdev1 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.393 [ 00:14:32.393 { 00:14:32.393 "name": "BaseBdev1", 00:14:32.393 "aliases": [ 00:14:32.393 "97f2f5a0-11d2-4868-aeb2-52900b792b3d" 00:14:32.393 ], 00:14:32.393 "product_name": "Malloc disk", 00:14:32.393 "block_size": 512, 00:14:32.393 "num_blocks": 65536, 00:14:32.393 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:32.393 "assigned_rate_limits": { 00:14:32.393 "rw_ios_per_sec": 0, 00:14:32.393 "rw_mbytes_per_sec": 0, 00:14:32.393 "r_mbytes_per_sec": 0, 00:14:32.393 "w_mbytes_per_sec": 0 00:14:32.393 }, 00:14:32.393 "claimed": true, 00:14:32.393 "claim_type": "exclusive_write", 00:14:32.393 "zoned": false, 00:14:32.393 "supported_io_types": { 00:14:32.393 "read": true, 00:14:32.393 "write": true, 00:14:32.393 "unmap": true, 00:14:32.393 "flush": true, 00:14:32.393 "reset": true, 00:14:32.393 "nvme_admin": false, 00:14:32.393 "nvme_io": false, 00:14:32.393 "nvme_io_md": false, 00:14:32.393 "write_zeroes": true, 00:14:32.393 "zcopy": true, 00:14:32.393 "get_zone_info": false, 00:14:32.393 "zone_management": false, 00:14:32.393 "zone_append": false, 00:14:32.393 "compare": false, 00:14:32.393 "compare_and_write": false, 00:14:32.393 "abort": true, 00:14:32.393 "seek_hole": false, 00:14:32.393 "seek_data": false, 00:14:32.393 "copy": true, 00:14:32.393 "nvme_iov_md": false 00:14:32.393 }, 00:14:32.393 "memory_domains": [ 00:14:32.393 { 00:14:32.393 "dma_device_id": "system", 00:14:32.393 "dma_device_type": 1 00:14:32.393 }, 00:14:32.393 { 00:14:32.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.393 "dma_device_type": 2 00:14:32.393 } 00:14:32.393 ], 00:14:32.393 "driver_specific": {} 00:14:32.393 } 00:14:32.393 ] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.393 "name": "Existed_Raid", 
00:14:32.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.393 "strip_size_kb": 64, 00:14:32.393 "state": "configuring", 00:14:32.393 "raid_level": "raid0", 00:14:32.393 "superblock": false, 00:14:32.393 "num_base_bdevs": 4, 00:14:32.393 "num_base_bdevs_discovered": 1, 00:14:32.393 "num_base_bdevs_operational": 4, 00:14:32.393 "base_bdevs_list": [ 00:14:32.393 { 00:14:32.393 "name": "BaseBdev1", 00:14:32.393 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:32.393 "is_configured": true, 00:14:32.393 "data_offset": 0, 00:14:32.393 "data_size": 65536 00:14:32.393 }, 00:14:32.393 { 00:14:32.393 "name": "BaseBdev2", 00:14:32.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.393 "is_configured": false, 00:14:32.393 "data_offset": 0, 00:14:32.393 "data_size": 0 00:14:32.393 }, 00:14:32.393 { 00:14:32.393 "name": "BaseBdev3", 00:14:32.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.393 "is_configured": false, 00:14:32.393 "data_offset": 0, 00:14:32.393 "data_size": 0 00:14:32.393 }, 00:14:32.393 { 00:14:32.393 "name": "BaseBdev4", 00:14:32.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.393 "is_configured": false, 00:14:32.393 "data_offset": 0, 00:14:32.393 "data_size": 0 00:14:32.393 } 00:14:32.393 ] 00:14:32.393 }' 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.393 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.962 [2024-11-27 14:14:03.616224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.962 [2024-11-27 14:14:03.616285] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.962 [2024-11-27 14:14:03.628306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.962 [2024-11-27 14:14:03.630478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.962 [2024-11-27 14:14:03.630576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.962 [2024-11-27 14:14:03.630610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.962 [2024-11-27 14:14:03.630638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.962 [2024-11-27 14:14:03.630660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:32.962 [2024-11-27 14:14:03.630684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.962 "name": "Existed_Raid", 00:14:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.962 "strip_size_kb": 64, 00:14:32.962 "state": "configuring", 00:14:32.962 "raid_level": "raid0", 00:14:32.962 "superblock": false, 00:14:32.962 "num_base_bdevs": 4, 00:14:32.962 
"num_base_bdevs_discovered": 1, 00:14:32.962 "num_base_bdevs_operational": 4, 00:14:32.962 "base_bdevs_list": [ 00:14:32.962 { 00:14:32.962 "name": "BaseBdev1", 00:14:32.962 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:32.962 "is_configured": true, 00:14:32.962 "data_offset": 0, 00:14:32.962 "data_size": 65536 00:14:32.962 }, 00:14:32.962 { 00:14:32.962 "name": "BaseBdev2", 00:14:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.962 "is_configured": false, 00:14:32.962 "data_offset": 0, 00:14:32.962 "data_size": 0 00:14:32.962 }, 00:14:32.962 { 00:14:32.962 "name": "BaseBdev3", 00:14:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.962 "is_configured": false, 00:14:32.962 "data_offset": 0, 00:14:32.962 "data_size": 0 00:14:32.962 }, 00:14:32.962 { 00:14:32.962 "name": "BaseBdev4", 00:14:32.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.962 "is_configured": false, 00:14:32.962 "data_offset": 0, 00:14:32.962 "data_size": 0 00:14:32.962 } 00:14:32.962 ] 00:14:32.962 }' 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.962 14:14:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 [2024-11-27 14:14:04.097646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.221 BaseBdev2 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:33.221 14:14:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.221 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 [ 00:14:33.221 { 00:14:33.221 "name": "BaseBdev2", 00:14:33.221 "aliases": [ 00:14:33.221 "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae" 00:14:33.221 ], 00:14:33.221 "product_name": "Malloc disk", 00:14:33.221 "block_size": 512, 00:14:33.221 "num_blocks": 65536, 00:14:33.221 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:33.221 "assigned_rate_limits": { 00:14:33.221 "rw_ios_per_sec": 0, 00:14:33.221 "rw_mbytes_per_sec": 0, 00:14:33.221 "r_mbytes_per_sec": 0, 00:14:33.221 "w_mbytes_per_sec": 0 00:14:33.221 }, 00:14:33.221 "claimed": true, 00:14:33.221 "claim_type": "exclusive_write", 00:14:33.221 "zoned": false, 00:14:33.221 "supported_io_types": { 
00:14:33.221 "read": true, 00:14:33.221 "write": true, 00:14:33.221 "unmap": true, 00:14:33.221 "flush": true, 00:14:33.221 "reset": true, 00:14:33.221 "nvme_admin": false, 00:14:33.221 "nvme_io": false, 00:14:33.221 "nvme_io_md": false, 00:14:33.221 "write_zeroes": true, 00:14:33.221 "zcopy": true, 00:14:33.221 "get_zone_info": false, 00:14:33.221 "zone_management": false, 00:14:33.221 "zone_append": false, 00:14:33.221 "compare": false, 00:14:33.221 "compare_and_write": false, 00:14:33.221 "abort": true, 00:14:33.221 "seek_hole": false, 00:14:33.221 "seek_data": false, 00:14:33.222 "copy": true, 00:14:33.222 "nvme_iov_md": false 00:14:33.222 }, 00:14:33.222 "memory_domains": [ 00:14:33.222 { 00:14:33.222 "dma_device_id": "system", 00:14:33.222 "dma_device_type": 1 00:14:33.222 }, 00:14:33.222 { 00:14:33.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.222 "dma_device_type": 2 00:14:33.222 } 00:14:33.222 ], 00:14:33.222 "driver_specific": {} 00:14:33.222 } 00:14:33.222 ] 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.222 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.481 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.481 "name": "Existed_Raid", 00:14:33.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.481 "strip_size_kb": 64, 00:14:33.481 "state": "configuring", 00:14:33.481 "raid_level": "raid0", 00:14:33.481 "superblock": false, 00:14:33.481 "num_base_bdevs": 4, 00:14:33.481 "num_base_bdevs_discovered": 2, 00:14:33.481 "num_base_bdevs_operational": 4, 00:14:33.481 "base_bdevs_list": [ 00:14:33.481 { 00:14:33.481 "name": "BaseBdev1", 00:14:33.481 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:33.481 "is_configured": true, 00:14:33.481 "data_offset": 0, 00:14:33.481 "data_size": 65536 00:14:33.481 }, 00:14:33.481 { 00:14:33.481 "name": "BaseBdev2", 00:14:33.481 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:33.481 
"is_configured": true, 00:14:33.481 "data_offset": 0, 00:14:33.481 "data_size": 65536 00:14:33.481 }, 00:14:33.481 { 00:14:33.481 "name": "BaseBdev3", 00:14:33.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.481 "is_configured": false, 00:14:33.481 "data_offset": 0, 00:14:33.481 "data_size": 0 00:14:33.481 }, 00:14:33.481 { 00:14:33.481 "name": "BaseBdev4", 00:14:33.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.481 "is_configured": false, 00:14:33.481 "data_offset": 0, 00:14:33.481 "data_size": 0 00:14:33.481 } 00:14:33.481 ] 00:14:33.481 }' 00:14:33.481 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.481 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.740 [2024-11-27 14:14:04.628959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.740 BaseBdev3 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.740 [ 00:14:33.740 { 00:14:33.740 "name": "BaseBdev3", 00:14:33.740 "aliases": [ 00:14:33.740 "bdb16521-dc1e-41ff-9c78-02e12032973c" 00:14:33.740 ], 00:14:33.740 "product_name": "Malloc disk", 00:14:33.740 "block_size": 512, 00:14:33.740 "num_blocks": 65536, 00:14:33.740 "uuid": "bdb16521-dc1e-41ff-9c78-02e12032973c", 00:14:33.740 "assigned_rate_limits": { 00:14:33.740 "rw_ios_per_sec": 0, 00:14:33.740 "rw_mbytes_per_sec": 0, 00:14:33.740 "r_mbytes_per_sec": 0, 00:14:33.740 "w_mbytes_per_sec": 0 00:14:33.740 }, 00:14:33.740 "claimed": true, 00:14:33.740 "claim_type": "exclusive_write", 00:14:33.740 "zoned": false, 00:14:33.740 "supported_io_types": { 00:14:33.740 "read": true, 00:14:33.740 "write": true, 00:14:33.740 "unmap": true, 00:14:33.740 "flush": true, 00:14:33.740 "reset": true, 00:14:33.740 "nvme_admin": false, 00:14:33.740 "nvme_io": false, 00:14:33.740 "nvme_io_md": false, 00:14:33.740 "write_zeroes": true, 00:14:33.740 "zcopy": true, 00:14:33.740 "get_zone_info": false, 00:14:33.740 "zone_management": false, 00:14:33.740 "zone_append": false, 00:14:33.740 "compare": false, 00:14:33.740 "compare_and_write": false, 
00:14:33.740 "abort": true, 00:14:33.740 "seek_hole": false, 00:14:33.740 "seek_data": false, 00:14:33.740 "copy": true, 00:14:33.740 "nvme_iov_md": false 00:14:33.740 }, 00:14:33.740 "memory_domains": [ 00:14:33.740 { 00:14:33.740 "dma_device_id": "system", 00:14:33.740 "dma_device_type": 1 00:14:33.740 }, 00:14:33.740 { 00:14:33.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.740 "dma_device_type": 2 00:14:33.740 } 00:14:33.740 ], 00:14:33.740 "driver_specific": {} 00:14:33.740 } 00:14:33.740 ] 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.740 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.999 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.999 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.999 "name": "Existed_Raid", 00:14:33.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.999 "strip_size_kb": 64, 00:14:33.999 "state": "configuring", 00:14:33.999 "raid_level": "raid0", 00:14:33.999 "superblock": false, 00:14:33.999 "num_base_bdevs": 4, 00:14:33.999 "num_base_bdevs_discovered": 3, 00:14:33.999 "num_base_bdevs_operational": 4, 00:14:33.999 "base_bdevs_list": [ 00:14:33.999 { 00:14:33.999 "name": "BaseBdev1", 00:14:33.999 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:33.999 "is_configured": true, 00:14:33.999 "data_offset": 0, 00:14:33.999 "data_size": 65536 00:14:33.999 }, 00:14:33.999 { 00:14:33.999 "name": "BaseBdev2", 00:14:33.999 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:33.999 "is_configured": true, 00:14:33.999 "data_offset": 0, 00:14:33.999 "data_size": 65536 00:14:33.999 }, 00:14:33.999 { 00:14:33.999 "name": "BaseBdev3", 00:14:33.999 "uuid": "bdb16521-dc1e-41ff-9c78-02e12032973c", 00:14:33.999 "is_configured": true, 00:14:33.999 "data_offset": 0, 00:14:33.999 "data_size": 65536 00:14:33.999 }, 00:14:33.999 { 00:14:33.999 "name": "BaseBdev4", 00:14:33.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.999 "is_configured": false, 
00:14:34.000 "data_offset": 0, 00:14:34.000 "data_size": 0 00:14:34.000 } 00:14:34.000 ] 00:14:34.000 }' 00:14:34.000 14:14:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.000 14:14:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.259 [2024-11-27 14:14:05.162718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.259 [2024-11-27 14:14:05.162877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:34.259 [2024-11-27 14:14:05.162908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:34.259 [2024-11-27 14:14:05.163286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:34.259 [2024-11-27 14:14:05.163514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:34.259 [2024-11-27 14:14:05.163564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:34.259 [2024-11-27 14:14:05.163930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.259 BaseBdev4 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.259 [ 00:14:34.259 { 00:14:34.259 "name": "BaseBdev4", 00:14:34.259 "aliases": [ 00:14:34.259 "f5e1e8f8-e79c-4644-9f58-6fcebda245f1" 00:14:34.259 ], 00:14:34.259 "product_name": "Malloc disk", 00:14:34.259 "block_size": 512, 00:14:34.259 "num_blocks": 65536, 00:14:34.259 "uuid": "f5e1e8f8-e79c-4644-9f58-6fcebda245f1", 00:14:34.259 "assigned_rate_limits": { 00:14:34.259 "rw_ios_per_sec": 0, 00:14:34.259 "rw_mbytes_per_sec": 0, 00:14:34.259 "r_mbytes_per_sec": 0, 00:14:34.259 "w_mbytes_per_sec": 0 00:14:34.259 }, 00:14:34.259 "claimed": true, 00:14:34.259 "claim_type": "exclusive_write", 00:14:34.259 "zoned": false, 00:14:34.259 "supported_io_types": { 00:14:34.259 "read": true, 00:14:34.259 "write": true, 00:14:34.259 "unmap": true, 00:14:34.259 "flush": true, 00:14:34.259 "reset": true, 00:14:34.259 
"nvme_admin": false, 00:14:34.259 "nvme_io": false, 00:14:34.259 "nvme_io_md": false, 00:14:34.259 "write_zeroes": true, 00:14:34.259 "zcopy": true, 00:14:34.259 "get_zone_info": false, 00:14:34.259 "zone_management": false, 00:14:34.259 "zone_append": false, 00:14:34.259 "compare": false, 00:14:34.259 "compare_and_write": false, 00:14:34.259 "abort": true, 00:14:34.259 "seek_hole": false, 00:14:34.259 "seek_data": false, 00:14:34.259 "copy": true, 00:14:34.259 "nvme_iov_md": false 00:14:34.259 }, 00:14:34.259 "memory_domains": [ 00:14:34.259 { 00:14:34.259 "dma_device_id": "system", 00:14:34.259 "dma_device_type": 1 00:14:34.259 }, 00:14:34.259 { 00:14:34.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.259 "dma_device_type": 2 00:14:34.259 } 00:14:34.259 ], 00:14:34.259 "driver_specific": {} 00:14:34.259 } 00:14:34.259 ] 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.259 14:14:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.259 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.518 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.519 "name": "Existed_Raid", 00:14:34.519 "uuid": "94de2327-e0cb-4a05-9f2e-b21c7874bb19", 00:14:34.519 "strip_size_kb": 64, 00:14:34.519 "state": "online", 00:14:34.519 "raid_level": "raid0", 00:14:34.519 "superblock": false, 00:14:34.519 "num_base_bdevs": 4, 00:14:34.519 "num_base_bdevs_discovered": 4, 00:14:34.519 "num_base_bdevs_operational": 4, 00:14:34.519 "base_bdevs_list": [ 00:14:34.519 { 00:14:34.519 "name": "BaseBdev1", 00:14:34.519 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:34.519 "is_configured": true, 00:14:34.519 "data_offset": 0, 00:14:34.519 "data_size": 65536 00:14:34.519 }, 00:14:34.519 { 00:14:34.519 "name": "BaseBdev2", 00:14:34.519 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:34.519 "is_configured": true, 00:14:34.519 "data_offset": 0, 00:14:34.519 "data_size": 65536 00:14:34.519 }, 00:14:34.519 { 00:14:34.519 "name": "BaseBdev3", 00:14:34.519 "uuid": 
"bdb16521-dc1e-41ff-9c78-02e12032973c", 00:14:34.519 "is_configured": true, 00:14:34.519 "data_offset": 0, 00:14:34.519 "data_size": 65536 00:14:34.519 }, 00:14:34.519 { 00:14:34.519 "name": "BaseBdev4", 00:14:34.519 "uuid": "f5e1e8f8-e79c-4644-9f58-6fcebda245f1", 00:14:34.519 "is_configured": true, 00:14:34.519 "data_offset": 0, 00:14:34.519 "data_size": 65536 00:14:34.519 } 00:14:34.519 ] 00:14:34.519 }' 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.519 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.778 [2024-11-27 14:14:05.674342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.778 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.778 14:14:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:34.778 "name": "Existed_Raid", 00:14:34.778 "aliases": [ 00:14:34.778 "94de2327-e0cb-4a05-9f2e-b21c7874bb19" 00:14:34.778 ], 00:14:34.778 "product_name": "Raid Volume", 00:14:34.778 "block_size": 512, 00:14:34.778 "num_blocks": 262144, 00:14:34.778 "uuid": "94de2327-e0cb-4a05-9f2e-b21c7874bb19", 00:14:34.778 "assigned_rate_limits": { 00:14:34.778 "rw_ios_per_sec": 0, 00:14:34.778 "rw_mbytes_per_sec": 0, 00:14:34.778 "r_mbytes_per_sec": 0, 00:14:34.778 "w_mbytes_per_sec": 0 00:14:34.778 }, 00:14:34.778 "claimed": false, 00:14:34.778 "zoned": false, 00:14:34.778 "supported_io_types": { 00:14:34.778 "read": true, 00:14:34.778 "write": true, 00:14:34.778 "unmap": true, 00:14:34.778 "flush": true, 00:14:34.778 "reset": true, 00:14:34.778 "nvme_admin": false, 00:14:34.778 "nvme_io": false, 00:14:34.778 "nvme_io_md": false, 00:14:34.778 "write_zeroes": true, 00:14:34.778 "zcopy": false, 00:14:34.778 "get_zone_info": false, 00:14:34.778 "zone_management": false, 00:14:34.778 "zone_append": false, 00:14:34.778 "compare": false, 00:14:34.778 "compare_and_write": false, 00:14:34.778 "abort": false, 00:14:34.778 "seek_hole": false, 00:14:34.778 "seek_data": false, 00:14:34.778 "copy": false, 00:14:34.778 "nvme_iov_md": false 00:14:34.778 }, 00:14:34.779 "memory_domains": [ 00:14:34.779 { 00:14:34.779 "dma_device_id": "system", 00:14:34.779 "dma_device_type": 1 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.779 "dma_device_type": 2 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "system", 00:14:34.779 "dma_device_type": 1 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.779 "dma_device_type": 2 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "system", 00:14:34.779 "dma_device_type": 1 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:34.779 "dma_device_type": 2 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "system", 00:14:34.779 "dma_device_type": 1 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.779 "dma_device_type": 2 00:14:34.779 } 00:14:34.779 ], 00:14:34.779 "driver_specific": { 00:14:34.779 "raid": { 00:14:34.779 "uuid": "94de2327-e0cb-4a05-9f2e-b21c7874bb19", 00:14:34.779 "strip_size_kb": 64, 00:14:34.779 "state": "online", 00:14:34.779 "raid_level": "raid0", 00:14:34.779 "superblock": false, 00:14:34.779 "num_base_bdevs": 4, 00:14:34.779 "num_base_bdevs_discovered": 4, 00:14:34.779 "num_base_bdevs_operational": 4, 00:14:34.779 "base_bdevs_list": [ 00:14:34.779 { 00:14:34.779 "name": "BaseBdev1", 00:14:34.779 "uuid": "97f2f5a0-11d2-4868-aeb2-52900b792b3d", 00:14:34.779 "is_configured": true, 00:14:34.779 "data_offset": 0, 00:14:34.779 "data_size": 65536 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "name": "BaseBdev2", 00:14:34.779 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:34.779 "is_configured": true, 00:14:34.779 "data_offset": 0, 00:14:34.779 "data_size": 65536 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "name": "BaseBdev3", 00:14:34.779 "uuid": "bdb16521-dc1e-41ff-9c78-02e12032973c", 00:14:34.779 "is_configured": true, 00:14:34.779 "data_offset": 0, 00:14:34.779 "data_size": 65536 00:14:34.779 }, 00:14:34.779 { 00:14:34.779 "name": "BaseBdev4", 00:14:34.779 "uuid": "f5e1e8f8-e79c-4644-9f58-6fcebda245f1", 00:14:34.779 "is_configured": true, 00:14:34.779 "data_offset": 0, 00:14:34.779 "data_size": 65536 00:14:34.779 } 00:14:34.779 ] 00:14:34.779 } 00:14:34.779 } 00:14:34.779 }' 00:14:34.779 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:35.038 BaseBdev2 00:14:35.038 BaseBdev3 
00:14:35.038 BaseBdev4' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.038 14:14:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.038 14:14:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.298 14:14:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.298 [2024-11-27 14:14:06.017498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.298 [2024-11-27 14:14:06.017574] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.298 [2024-11-27 14:14:06.017653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.298 "name": "Existed_Raid", 00:14:35.298 "uuid": "94de2327-e0cb-4a05-9f2e-b21c7874bb19", 00:14:35.298 "strip_size_kb": 64, 00:14:35.298 "state": "offline", 00:14:35.298 "raid_level": "raid0", 00:14:35.298 "superblock": false, 00:14:35.298 "num_base_bdevs": 4, 00:14:35.298 "num_base_bdevs_discovered": 3, 00:14:35.298 "num_base_bdevs_operational": 3, 00:14:35.298 "base_bdevs_list": [ 00:14:35.298 { 00:14:35.298 "name": null, 00:14:35.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.298 "is_configured": false, 00:14:35.298 "data_offset": 0, 00:14:35.298 "data_size": 65536 00:14:35.298 }, 00:14:35.298 { 00:14:35.298 "name": "BaseBdev2", 00:14:35.298 "uuid": "cc8e1fa6-2d94-4bb6-8ee9-ee3b77bed2ae", 00:14:35.298 "is_configured": 
true, 00:14:35.298 "data_offset": 0, 00:14:35.298 "data_size": 65536 00:14:35.298 }, 00:14:35.298 { 00:14:35.298 "name": "BaseBdev3", 00:14:35.298 "uuid": "bdb16521-dc1e-41ff-9c78-02e12032973c", 00:14:35.298 "is_configured": true, 00:14:35.298 "data_offset": 0, 00:14:35.298 "data_size": 65536 00:14:35.298 }, 00:14:35.298 { 00:14:35.298 "name": "BaseBdev4", 00:14:35.298 "uuid": "f5e1e8f8-e79c-4644-9f58-6fcebda245f1", 00:14:35.298 "is_configured": true, 00:14:35.298 "data_offset": 0, 00:14:35.298 "data_size": 65536 00:14:35.298 } 00:14:35.298 ] 00:14:35.298 }' 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.298 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.870 [2024-11-27 14:14:06.622476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.870 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.870 [2024-11-27 14:14:06.781338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:36.129 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:36.130 14:14:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.130 14:14:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.130 [2024-11-27 14:14:06.946051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:36.130 [2024-11-27 14:14:06.946190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.130 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 BaseBdev2 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 [ 00:14:36.390 { 00:14:36.390 "name": "BaseBdev2", 00:14:36.390 "aliases": [ 00:14:36.390 "6ecbd050-e194-4bc5-9d52-61c96dd6ba79" 00:14:36.390 ], 00:14:36.390 "product_name": "Malloc disk", 00:14:36.390 "block_size": 512, 00:14:36.390 "num_blocks": 65536, 00:14:36.390 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:36.390 "assigned_rate_limits": { 00:14:36.390 "rw_ios_per_sec": 0, 00:14:36.390 "rw_mbytes_per_sec": 0, 00:14:36.390 "r_mbytes_per_sec": 0, 00:14:36.390 "w_mbytes_per_sec": 0 00:14:36.390 }, 00:14:36.390 "claimed": false, 00:14:36.390 "zoned": false, 00:14:36.390 "supported_io_types": { 00:14:36.390 "read": true, 00:14:36.390 "write": true, 00:14:36.390 "unmap": true, 00:14:36.390 "flush": true, 00:14:36.390 "reset": true, 00:14:36.390 "nvme_admin": false, 00:14:36.390 "nvme_io": false, 00:14:36.390 "nvme_io_md": false, 00:14:36.390 "write_zeroes": true, 00:14:36.390 "zcopy": true, 00:14:36.390 "get_zone_info": false, 00:14:36.390 "zone_management": false, 00:14:36.390 "zone_append": false, 00:14:36.390 "compare": false, 00:14:36.390 "compare_and_write": false, 00:14:36.390 "abort": true, 00:14:36.390 "seek_hole": false, 00:14:36.390 "seek_data": false, 
00:14:36.390 "copy": true, 00:14:36.390 "nvme_iov_md": false 00:14:36.390 }, 00:14:36.390 "memory_domains": [ 00:14:36.390 { 00:14:36.390 "dma_device_id": "system", 00:14:36.390 "dma_device_type": 1 00:14:36.390 }, 00:14:36.390 { 00:14:36.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.390 "dma_device_type": 2 00:14:36.390 } 00:14:36.390 ], 00:14:36.390 "driver_specific": {} 00:14:36.390 } 00:14:36.390 ] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 BaseBdev3 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.390 
14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.390 [ 00:14:36.390 { 00:14:36.390 "name": "BaseBdev3", 00:14:36.390 "aliases": [ 00:14:36.390 "531dd484-7dff-4812-8567-b1295fc32d2b" 00:14:36.390 ], 00:14:36.390 "product_name": "Malloc disk", 00:14:36.390 "block_size": 512, 00:14:36.390 "num_blocks": 65536, 00:14:36.390 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:36.390 "assigned_rate_limits": { 00:14:36.390 "rw_ios_per_sec": 0, 00:14:36.390 "rw_mbytes_per_sec": 0, 00:14:36.390 "r_mbytes_per_sec": 0, 00:14:36.390 "w_mbytes_per_sec": 0 00:14:36.390 }, 00:14:36.390 "claimed": false, 00:14:36.390 "zoned": false, 00:14:36.390 "supported_io_types": { 00:14:36.390 "read": true, 00:14:36.390 "write": true, 00:14:36.390 "unmap": true, 00:14:36.390 "flush": true, 00:14:36.390 "reset": true, 00:14:36.390 "nvme_admin": false, 00:14:36.390 "nvme_io": false, 00:14:36.390 "nvme_io_md": false, 00:14:36.390 "write_zeroes": true, 00:14:36.390 "zcopy": true, 00:14:36.390 "get_zone_info": false, 00:14:36.390 "zone_management": false, 00:14:36.390 "zone_append": false, 00:14:36.390 "compare": false, 00:14:36.390 "compare_and_write": false, 00:14:36.390 "abort": true, 00:14:36.390 "seek_hole": false, 00:14:36.390 "seek_data": false, 00:14:36.390 
"copy": true, 00:14:36.390 "nvme_iov_md": false 00:14:36.390 }, 00:14:36.390 "memory_domains": [ 00:14:36.390 { 00:14:36.390 "dma_device_id": "system", 00:14:36.390 "dma_device_type": 1 00:14:36.390 }, 00:14:36.390 { 00:14:36.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.390 "dma_device_type": 2 00:14:36.390 } 00:14:36.390 ], 00:14:36.390 "driver_specific": {} 00:14:36.390 } 00:14:36.391 ] 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.391 BaseBdev4 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.391 14:14:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.391 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.651 [ 00:14:36.651 { 00:14:36.651 "name": "BaseBdev4", 00:14:36.651 "aliases": [ 00:14:36.651 "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb" 00:14:36.651 ], 00:14:36.651 "product_name": "Malloc disk", 00:14:36.651 "block_size": 512, 00:14:36.651 "num_blocks": 65536, 00:14:36.651 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:36.651 "assigned_rate_limits": { 00:14:36.651 "rw_ios_per_sec": 0, 00:14:36.651 "rw_mbytes_per_sec": 0, 00:14:36.651 "r_mbytes_per_sec": 0, 00:14:36.651 "w_mbytes_per_sec": 0 00:14:36.651 }, 00:14:36.651 "claimed": false, 00:14:36.651 "zoned": false, 00:14:36.651 "supported_io_types": { 00:14:36.651 "read": true, 00:14:36.651 "write": true, 00:14:36.651 "unmap": true, 00:14:36.651 "flush": true, 00:14:36.651 "reset": true, 00:14:36.651 "nvme_admin": false, 00:14:36.651 "nvme_io": false, 00:14:36.651 "nvme_io_md": false, 00:14:36.651 "write_zeroes": true, 00:14:36.651 "zcopy": true, 00:14:36.651 "get_zone_info": false, 00:14:36.651 "zone_management": false, 00:14:36.651 "zone_append": false, 00:14:36.651 "compare": false, 00:14:36.651 "compare_and_write": false, 00:14:36.651 "abort": true, 00:14:36.651 "seek_hole": false, 00:14:36.651 "seek_data": false, 00:14:36.651 "copy": true, 
00:14:36.651 "nvme_iov_md": false 00:14:36.651 }, 00:14:36.651 "memory_domains": [ 00:14:36.651 { 00:14:36.651 "dma_device_id": "system", 00:14:36.651 "dma_device_type": 1 00:14:36.651 }, 00:14:36.651 { 00:14:36.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.651 "dma_device_type": 2 00:14:36.651 } 00:14:36.651 ], 00:14:36.651 "driver_specific": {} 00:14:36.651 } 00:14:36.651 ] 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.651 [2024-11-27 14:14:07.375897] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.651 [2024-11-27 14:14:07.376007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.651 [2024-11-27 14:14:07.376059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.651 [2024-11-27 14:14:07.378165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.651 [2024-11-27 14:14:07.378264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.651 14:14:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.651 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.651 "name": "Existed_Raid", 00:14:36.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.651 "strip_size_kb": 64, 00:14:36.651 "state": "configuring", 00:14:36.651 
"raid_level": "raid0", 00:14:36.651 "superblock": false, 00:14:36.651 "num_base_bdevs": 4, 00:14:36.651 "num_base_bdevs_discovered": 3, 00:14:36.651 "num_base_bdevs_operational": 4, 00:14:36.651 "base_bdevs_list": [ 00:14:36.651 { 00:14:36.651 "name": "BaseBdev1", 00:14:36.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.651 "is_configured": false, 00:14:36.651 "data_offset": 0, 00:14:36.651 "data_size": 0 00:14:36.651 }, 00:14:36.651 { 00:14:36.651 "name": "BaseBdev2", 00:14:36.651 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:36.651 "is_configured": true, 00:14:36.651 "data_offset": 0, 00:14:36.651 "data_size": 65536 00:14:36.651 }, 00:14:36.651 { 00:14:36.651 "name": "BaseBdev3", 00:14:36.651 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:36.651 "is_configured": true, 00:14:36.651 "data_offset": 0, 00:14:36.651 "data_size": 65536 00:14:36.651 }, 00:14:36.651 { 00:14:36.651 "name": "BaseBdev4", 00:14:36.651 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:36.651 "is_configured": true, 00:14:36.652 "data_offset": 0, 00:14:36.652 "data_size": 65536 00:14:36.652 } 00:14:36.652 ] 00:14:36.652 }' 00:14:36.652 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.652 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.911 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:36.911 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.911 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.911 [2024-11-27 14:14:07.859094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.911 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.171 "name": "Existed_Raid", 00:14:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.171 "strip_size_kb": 64, 00:14:37.171 "state": "configuring", 00:14:37.171 "raid_level": "raid0", 00:14:37.171 "superblock": false, 00:14:37.171 
"num_base_bdevs": 4, 00:14:37.171 "num_base_bdevs_discovered": 2, 00:14:37.171 "num_base_bdevs_operational": 4, 00:14:37.171 "base_bdevs_list": [ 00:14:37.171 { 00:14:37.171 "name": "BaseBdev1", 00:14:37.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.171 "is_configured": false, 00:14:37.171 "data_offset": 0, 00:14:37.171 "data_size": 0 00:14:37.171 }, 00:14:37.171 { 00:14:37.171 "name": null, 00:14:37.171 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:37.171 "is_configured": false, 00:14:37.171 "data_offset": 0, 00:14:37.171 "data_size": 65536 00:14:37.171 }, 00:14:37.171 { 00:14:37.171 "name": "BaseBdev3", 00:14:37.171 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:37.171 "is_configured": true, 00:14:37.171 "data_offset": 0, 00:14:37.171 "data_size": 65536 00:14:37.171 }, 00:14:37.171 { 00:14:37.171 "name": "BaseBdev4", 00:14:37.171 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:37.171 "is_configured": true, 00:14:37.171 "data_offset": 0, 00:14:37.171 "data_size": 65536 00:14:37.171 } 00:14:37.171 ] 00:14:37.171 }' 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.171 14:14:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.430 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.430 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:37.431 14:14:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.431 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.690 [2024-11-27 14:14:08.413228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.690 BaseBdev1 00:14:37.690 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.690 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:37.690 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:37.690 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:37.690 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.691 [ 00:14:37.691 { 00:14:37.691 "name": "BaseBdev1", 00:14:37.691 "aliases": [ 00:14:37.691 "8594927b-30b5-4a7b-90c9-d799c32104f1" 00:14:37.691 ], 00:14:37.691 "product_name": "Malloc disk", 00:14:37.691 "block_size": 512, 00:14:37.691 "num_blocks": 65536, 00:14:37.691 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:37.691 "assigned_rate_limits": { 00:14:37.691 "rw_ios_per_sec": 0, 00:14:37.691 "rw_mbytes_per_sec": 0, 00:14:37.691 "r_mbytes_per_sec": 0, 00:14:37.691 "w_mbytes_per_sec": 0 00:14:37.691 }, 00:14:37.691 "claimed": true, 00:14:37.691 "claim_type": "exclusive_write", 00:14:37.691 "zoned": false, 00:14:37.691 "supported_io_types": { 00:14:37.691 "read": true, 00:14:37.691 "write": true, 00:14:37.691 "unmap": true, 00:14:37.691 "flush": true, 00:14:37.691 "reset": true, 00:14:37.691 "nvme_admin": false, 00:14:37.691 "nvme_io": false, 00:14:37.691 "nvme_io_md": false, 00:14:37.691 "write_zeroes": true, 00:14:37.691 "zcopy": true, 00:14:37.691 "get_zone_info": false, 00:14:37.691 "zone_management": false, 00:14:37.691 "zone_append": false, 00:14:37.691 "compare": false, 00:14:37.691 "compare_and_write": false, 00:14:37.691 "abort": true, 00:14:37.691 "seek_hole": false, 00:14:37.691 "seek_data": false, 00:14:37.691 "copy": true, 00:14:37.691 "nvme_iov_md": false 00:14:37.691 }, 00:14:37.691 "memory_domains": [ 00:14:37.691 { 00:14:37.691 "dma_device_id": "system", 00:14:37.691 "dma_device_type": 1 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.691 "dma_device_type": 2 00:14:37.691 } 00:14:37.691 ], 00:14:37.691 "driver_specific": {} 00:14:37.691 } 00:14:37.691 ] 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.691 "name": "Existed_Raid", 00:14:37.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.691 "strip_size_kb": 64, 00:14:37.691 "state": "configuring", 00:14:37.691 "raid_level": "raid0", 00:14:37.691 "superblock": false, 
00:14:37.691 "num_base_bdevs": 4, 00:14:37.691 "num_base_bdevs_discovered": 3, 00:14:37.691 "num_base_bdevs_operational": 4, 00:14:37.691 "base_bdevs_list": [ 00:14:37.691 { 00:14:37.691 "name": "BaseBdev1", 00:14:37.691 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 0, 00:14:37.691 "data_size": 65536 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": null, 00:14:37.691 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:37.691 "is_configured": false, 00:14:37.691 "data_offset": 0, 00:14:37.691 "data_size": 65536 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": "BaseBdev3", 00:14:37.691 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 0, 00:14:37.691 "data_size": 65536 00:14:37.691 }, 00:14:37.691 { 00:14:37.691 "name": "BaseBdev4", 00:14:37.691 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:37.691 "is_configured": true, 00:14:37.691 "data_offset": 0, 00:14:37.691 "data_size": 65536 00:14:37.691 } 00:14:37.691 ] 00:14:37.691 }' 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.691 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:38.259 14:14:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 [2024-11-27 14:14:08.964413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.259 14:14:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.259 14:14:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.259 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.259 "name": "Existed_Raid", 00:14:38.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.259 "strip_size_kb": 64, 00:14:38.259 "state": "configuring", 00:14:38.259 "raid_level": "raid0", 00:14:38.259 "superblock": false, 00:14:38.259 "num_base_bdevs": 4, 00:14:38.259 "num_base_bdevs_discovered": 2, 00:14:38.259 "num_base_bdevs_operational": 4, 00:14:38.259 "base_bdevs_list": [ 00:14:38.259 { 00:14:38.259 "name": "BaseBdev1", 00:14:38.259 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:38.259 "is_configured": true, 00:14:38.259 "data_offset": 0, 00:14:38.259 "data_size": 65536 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": null, 00:14:38.259 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:38.259 "is_configured": false, 00:14:38.259 "data_offset": 0, 00:14:38.259 "data_size": 65536 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": null, 00:14:38.259 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:38.259 "is_configured": false, 00:14:38.259 "data_offset": 0, 00:14:38.259 "data_size": 65536 00:14:38.259 }, 00:14:38.259 { 00:14:38.259 "name": "BaseBdev4", 00:14:38.259 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:38.259 "is_configured": true, 00:14:38.259 "data_offset": 0, 00:14:38.259 "data_size": 65536 00:14:38.259 } 00:14:38.259 ] 00:14:38.259 }' 00:14:38.259 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.259 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.516 14:14:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.516 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.516 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.516 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:38.516 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.774 [2024-11-27 14:14:09.491549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.774 "name": "Existed_Raid", 00:14:38.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.774 "strip_size_kb": 64, 00:14:38.774 "state": "configuring", 00:14:38.774 "raid_level": "raid0", 00:14:38.774 "superblock": false, 00:14:38.774 "num_base_bdevs": 4, 00:14:38.774 "num_base_bdevs_discovered": 3, 00:14:38.774 "num_base_bdevs_operational": 4, 00:14:38.774 "base_bdevs_list": [ 00:14:38.774 { 00:14:38.774 "name": "BaseBdev1", 00:14:38.774 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:38.774 "is_configured": true, 00:14:38.774 "data_offset": 0, 00:14:38.774 "data_size": 65536 00:14:38.774 }, 00:14:38.774 { 00:14:38.774 "name": null, 00:14:38.774 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:38.774 "is_configured": false, 00:14:38.774 "data_offset": 0, 00:14:38.774 "data_size": 65536 00:14:38.774 }, 00:14:38.774 { 00:14:38.774 "name": "BaseBdev3", 00:14:38.774 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 
00:14:38.774 "is_configured": true, 00:14:38.774 "data_offset": 0, 00:14:38.774 "data_size": 65536 00:14:38.774 }, 00:14:38.774 { 00:14:38.774 "name": "BaseBdev4", 00:14:38.774 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:38.774 "is_configured": true, 00:14:38.774 "data_offset": 0, 00:14:38.774 "data_size": 65536 00:14:38.774 } 00:14:38.774 ] 00:14:38.774 }' 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.774 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.033 14:14:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.033 [2024-11-27 14:14:09.978773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:39.291 14:14:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.291 "name": "Existed_Raid", 00:14:39.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.291 "strip_size_kb": 64, 00:14:39.291 "state": "configuring", 00:14:39.291 "raid_level": "raid0", 00:14:39.291 "superblock": false, 00:14:39.291 "num_base_bdevs": 4, 00:14:39.291 "num_base_bdevs_discovered": 2, 00:14:39.291 
"num_base_bdevs_operational": 4, 00:14:39.291 "base_bdevs_list": [ 00:14:39.291 { 00:14:39.291 "name": null, 00:14:39.291 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:39.291 "is_configured": false, 00:14:39.291 "data_offset": 0, 00:14:39.291 "data_size": 65536 00:14:39.291 }, 00:14:39.291 { 00:14:39.291 "name": null, 00:14:39.291 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:39.291 "is_configured": false, 00:14:39.291 "data_offset": 0, 00:14:39.291 "data_size": 65536 00:14:39.291 }, 00:14:39.291 { 00:14:39.291 "name": "BaseBdev3", 00:14:39.291 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:39.291 "is_configured": true, 00:14:39.291 "data_offset": 0, 00:14:39.291 "data_size": 65536 00:14:39.291 }, 00:14:39.291 { 00:14:39.291 "name": "BaseBdev4", 00:14:39.291 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:39.291 "is_configured": true, 00:14:39.291 "data_offset": 0, 00:14:39.291 "data_size": 65536 00:14:39.291 } 00:14:39.291 ] 00:14:39.291 }' 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.291 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.857 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.857 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:39.857 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.858 [2024-11-27 14:14:10.608327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.858 
14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.858 "name": "Existed_Raid", 00:14:39.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.858 "strip_size_kb": 64, 00:14:39.858 "state": "configuring", 00:14:39.858 "raid_level": "raid0", 00:14:39.858 "superblock": false, 00:14:39.858 "num_base_bdevs": 4, 00:14:39.858 "num_base_bdevs_discovered": 3, 00:14:39.858 "num_base_bdevs_operational": 4, 00:14:39.858 "base_bdevs_list": [ 00:14:39.858 { 00:14:39.858 "name": null, 00:14:39.858 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:39.858 "is_configured": false, 00:14:39.858 "data_offset": 0, 00:14:39.858 "data_size": 65536 00:14:39.858 }, 00:14:39.858 { 00:14:39.858 "name": "BaseBdev2", 00:14:39.858 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:39.858 "is_configured": true, 00:14:39.858 "data_offset": 0, 00:14:39.858 "data_size": 65536 00:14:39.858 }, 00:14:39.858 { 00:14:39.858 "name": "BaseBdev3", 00:14:39.858 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:39.858 "is_configured": true, 00:14:39.858 "data_offset": 0, 00:14:39.858 "data_size": 65536 00:14:39.858 }, 00:14:39.858 { 00:14:39.858 "name": "BaseBdev4", 00:14:39.858 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:39.858 "is_configured": true, 00:14:39.858 "data_offset": 0, 00:14:39.858 "data_size": 65536 00:14:39.858 } 00:14:39.858 ] 00:14:39.858 }' 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.858 14:14:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.426 14:14:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8594927b-30b5-4a7b-90c9-d799c32104f1 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 [2024-11-27 14:14:11.206067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:40.426 [2024-11-27 14:14:11.206139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:40.426 [2024-11-27 14:14:11.206147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:40.426 [2024-11-27 14:14:11.206425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:14:40.426 [2024-11-27 14:14:11.206571] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:40.426 [2024-11-27 14:14:11.206586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:40.426 [2024-11-27 14:14:11.206854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.426 NewBaseBdev 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:14:40.426 [ 00:14:40.426 { 00:14:40.426 "name": "NewBaseBdev", 00:14:40.426 "aliases": [ 00:14:40.426 "8594927b-30b5-4a7b-90c9-d799c32104f1" 00:14:40.426 ], 00:14:40.426 "product_name": "Malloc disk", 00:14:40.426 "block_size": 512, 00:14:40.426 "num_blocks": 65536, 00:14:40.426 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:40.426 "assigned_rate_limits": { 00:14:40.426 "rw_ios_per_sec": 0, 00:14:40.426 "rw_mbytes_per_sec": 0, 00:14:40.426 "r_mbytes_per_sec": 0, 00:14:40.426 "w_mbytes_per_sec": 0 00:14:40.426 }, 00:14:40.426 "claimed": true, 00:14:40.426 "claim_type": "exclusive_write", 00:14:40.426 "zoned": false, 00:14:40.426 "supported_io_types": { 00:14:40.426 "read": true, 00:14:40.426 "write": true, 00:14:40.426 "unmap": true, 00:14:40.426 "flush": true, 00:14:40.426 "reset": true, 00:14:40.426 "nvme_admin": false, 00:14:40.426 "nvme_io": false, 00:14:40.426 "nvme_io_md": false, 00:14:40.426 "write_zeroes": true, 00:14:40.426 "zcopy": true, 00:14:40.426 "get_zone_info": false, 00:14:40.426 "zone_management": false, 00:14:40.426 "zone_append": false, 00:14:40.426 "compare": false, 00:14:40.426 "compare_and_write": false, 00:14:40.426 "abort": true, 00:14:40.426 "seek_hole": false, 00:14:40.426 "seek_data": false, 00:14:40.426 "copy": true, 00:14:40.426 "nvme_iov_md": false 00:14:40.426 }, 00:14:40.426 "memory_domains": [ 00:14:40.426 { 00:14:40.426 "dma_device_id": "system", 00:14:40.426 "dma_device_type": 1 00:14:40.426 }, 00:14:40.426 { 00:14:40.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.426 "dma_device_type": 2 00:14:40.426 } 00:14:40.426 ], 00:14:40.426 "driver_specific": {} 00:14:40.426 } 00:14:40.426 ] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.426 "name": "Existed_Raid", 00:14:40.426 "uuid": "e8b9198b-14e1-4841-a183-629e0ecac6fd", 00:14:40.426 "strip_size_kb": 64, 00:14:40.426 "state": "online", 00:14:40.426 "raid_level": "raid0", 00:14:40.426 "superblock": false, 00:14:40.426 "num_base_bdevs": 4, 00:14:40.426 
"num_base_bdevs_discovered": 4, 00:14:40.426 "num_base_bdevs_operational": 4, 00:14:40.426 "base_bdevs_list": [ 00:14:40.426 { 00:14:40.426 "name": "NewBaseBdev", 00:14:40.426 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:40.426 "is_configured": true, 00:14:40.426 "data_offset": 0, 00:14:40.426 "data_size": 65536 00:14:40.426 }, 00:14:40.426 { 00:14:40.426 "name": "BaseBdev2", 00:14:40.426 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:40.426 "is_configured": true, 00:14:40.426 "data_offset": 0, 00:14:40.426 "data_size": 65536 00:14:40.426 }, 00:14:40.426 { 00:14:40.426 "name": "BaseBdev3", 00:14:40.426 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:40.426 "is_configured": true, 00:14:40.426 "data_offset": 0, 00:14:40.426 "data_size": 65536 00:14:40.426 }, 00:14:40.426 { 00:14:40.426 "name": "BaseBdev4", 00:14:40.426 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:40.426 "is_configured": true, 00:14:40.426 "data_offset": 0, 00:14:40.426 "data_size": 65536 00:14:40.426 } 00:14:40.426 ] 00:14:40.426 }' 00:14:40.426 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.427 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.997 [2024-11-27 14:14:11.741686] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.997 "name": "Existed_Raid", 00:14:40.997 "aliases": [ 00:14:40.997 "e8b9198b-14e1-4841-a183-629e0ecac6fd" 00:14:40.997 ], 00:14:40.997 "product_name": "Raid Volume", 00:14:40.997 "block_size": 512, 00:14:40.997 "num_blocks": 262144, 00:14:40.997 "uuid": "e8b9198b-14e1-4841-a183-629e0ecac6fd", 00:14:40.997 "assigned_rate_limits": { 00:14:40.997 "rw_ios_per_sec": 0, 00:14:40.997 "rw_mbytes_per_sec": 0, 00:14:40.997 "r_mbytes_per_sec": 0, 00:14:40.997 "w_mbytes_per_sec": 0 00:14:40.997 }, 00:14:40.997 "claimed": false, 00:14:40.997 "zoned": false, 00:14:40.997 "supported_io_types": { 00:14:40.997 "read": true, 00:14:40.997 "write": true, 00:14:40.997 "unmap": true, 00:14:40.997 "flush": true, 00:14:40.997 "reset": true, 00:14:40.997 "nvme_admin": false, 00:14:40.997 "nvme_io": false, 00:14:40.997 "nvme_io_md": false, 00:14:40.997 "write_zeroes": true, 00:14:40.997 "zcopy": false, 00:14:40.997 "get_zone_info": false, 00:14:40.997 "zone_management": false, 00:14:40.997 "zone_append": false, 00:14:40.997 "compare": false, 00:14:40.997 "compare_and_write": false, 00:14:40.997 "abort": false, 00:14:40.997 "seek_hole": false, 00:14:40.997 "seek_data": false, 00:14:40.997 "copy": false, 00:14:40.997 "nvme_iov_md": false 00:14:40.997 }, 00:14:40.997 "memory_domains": [ 
00:14:40.997 { 00:14:40.997 "dma_device_id": "system", 00:14:40.997 "dma_device_type": 1 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.997 "dma_device_type": 2 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "system", 00:14:40.997 "dma_device_type": 1 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.997 "dma_device_type": 2 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "system", 00:14:40.997 "dma_device_type": 1 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.997 "dma_device_type": 2 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "system", 00:14:40.997 "dma_device_type": 1 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.997 "dma_device_type": 2 00:14:40.997 } 00:14:40.997 ], 00:14:40.997 "driver_specific": { 00:14:40.997 "raid": { 00:14:40.997 "uuid": "e8b9198b-14e1-4841-a183-629e0ecac6fd", 00:14:40.997 "strip_size_kb": 64, 00:14:40.997 "state": "online", 00:14:40.997 "raid_level": "raid0", 00:14:40.997 "superblock": false, 00:14:40.997 "num_base_bdevs": 4, 00:14:40.997 "num_base_bdevs_discovered": 4, 00:14:40.997 "num_base_bdevs_operational": 4, 00:14:40.997 "base_bdevs_list": [ 00:14:40.997 { 00:14:40.997 "name": "NewBaseBdev", 00:14:40.997 "uuid": "8594927b-30b5-4a7b-90c9-d799c32104f1", 00:14:40.997 "is_configured": true, 00:14:40.997 "data_offset": 0, 00:14:40.997 "data_size": 65536 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "name": "BaseBdev2", 00:14:40.997 "uuid": "6ecbd050-e194-4bc5-9d52-61c96dd6ba79", 00:14:40.997 "is_configured": true, 00:14:40.997 "data_offset": 0, 00:14:40.997 "data_size": 65536 00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "name": "BaseBdev3", 00:14:40.997 "uuid": "531dd484-7dff-4812-8567-b1295fc32d2b", 00:14:40.997 "is_configured": true, 00:14:40.997 "data_offset": 0, 00:14:40.997 "data_size": 65536 
00:14:40.997 }, 00:14:40.997 { 00:14:40.997 "name": "BaseBdev4", 00:14:40.997 "uuid": "778a4cbd-9358-4ea1-9f1b-7db800c9a3bb", 00:14:40.997 "is_configured": true, 00:14:40.997 "data_offset": 0, 00:14:40.997 "data_size": 65536 00:14:40.997 } 00:14:40.997 ] 00:14:40.997 } 00:14:40.997 } 00:14:40.997 }' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:40.997 BaseBdev2 00:14:40.997 BaseBdev3 00:14:40.997 BaseBdev4' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.997 
14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.997 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.257 14:14:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.257 [2024-11-27 14:14:12.076731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.257 [2024-11-27 14:14:12.076767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.257 [2024-11-27 14:14:12.076869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.257 [2024-11-27 14:14:12.076949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.257 [2024-11-27 14:14:12.076961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69598 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69598 ']' 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69598 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69598 00:14:41.257 killing process with pid 69598 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.257 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.258 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69598' 00:14:41.258 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69598 00:14:41.258 [2024-11-27 14:14:12.122790] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.258 14:14:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69598 00:14:41.840 [2024-11-27 14:14:12.594166] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.221 00:14:43.221 real 0m12.197s 00:14:43.221 user 0m19.411s 00:14:43.221 sys 0m2.065s 00:14:43.221 ************************************ 00:14:43.221 END TEST raid_state_function_test 00:14:43.221 ************************************ 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 14:14:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:14:43.221 14:14:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:43.221 14:14:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.221 14:14:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 ************************************ 00:14:43.221 START TEST raid_state_function_test_sb 00:14:43.221 ************************************ 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.221 
14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:43.221 Process raid pid: 70269 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70269 00:14:43.221 14:14:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70269' 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70269 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70269 ']' 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.221 14:14:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 [2024-11-27 14:14:13.968926] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:43.221 [2024-11-27 14:14:13.969153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.221 [2024-11-27 14:14:14.146753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.481 [2024-11-27 14:14:14.270562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.740 [2024-11-27 14:14:14.498093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.740 [2024-11-27 14:14:14.498155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 [2024-11-27 14:14:14.894661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.000 [2024-11-27 14:14:14.894813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.000 [2024-11-27 14:14:14.894830] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.000 [2024-11-27 14:14:14.894842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.000 [2024-11-27 14:14:14.894850] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:14:44.000 [2024-11-27 14:14:14.894860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.000 [2024-11-27 14:14:14.894866] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:44.000 [2024-11-27 14:14:14.894876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.000 14:14:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.259 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.259 "name": "Existed_Raid", 00:14:44.259 "uuid": "a7a39c02-21a7-4f08-8ca8-96af7e3dee08", 00:14:44.259 "strip_size_kb": 64, 00:14:44.259 "state": "configuring", 00:14:44.259 "raid_level": "raid0", 00:14:44.259 "superblock": true, 00:14:44.259 "num_base_bdevs": 4, 00:14:44.259 "num_base_bdevs_discovered": 0, 00:14:44.259 "num_base_bdevs_operational": 4, 00:14:44.259 "base_bdevs_list": [ 00:14:44.259 { 00:14:44.259 "name": "BaseBdev1", 00:14:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.259 "is_configured": false, 00:14:44.259 "data_offset": 0, 00:14:44.259 "data_size": 0 00:14:44.259 }, 00:14:44.259 { 00:14:44.259 "name": "BaseBdev2", 00:14:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.259 "is_configured": false, 00:14:44.259 "data_offset": 0, 00:14:44.259 "data_size": 0 00:14:44.259 }, 00:14:44.259 { 00:14:44.259 "name": "BaseBdev3", 00:14:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.259 "is_configured": false, 00:14:44.259 "data_offset": 0, 00:14:44.259 "data_size": 0 00:14:44.259 }, 00:14:44.259 { 00:14:44.259 "name": "BaseBdev4", 00:14:44.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.259 "is_configured": false, 00:14:44.259 "data_offset": 0, 00:14:44.259 "data_size": 0 00:14:44.259 } 00:14:44.259 ] 00:14:44.259 }' 00:14:44.259 14:14:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.259 14:14:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 [2024-11-27 14:14:15.353787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.518 [2024-11-27 14:14:15.353893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 [2024-11-27 14:14:15.361767] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.518 [2024-11-27 14:14:15.361853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.518 [2024-11-27 14:14:15.361882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.518 [2024-11-27 14:14:15.361905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.518 [2024-11-27 14:14:15.361924] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.518 [2024-11-27 14:14:15.361945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.518 [2024-11-27 14:14:15.361985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:44.518 [2024-11-27 14:14:15.362031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 [2024-11-27 14:14:15.406013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.518 BaseBdev1 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.518 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.518 [ 00:14:44.518 { 00:14:44.518 "name": "BaseBdev1", 00:14:44.518 "aliases": [ 00:14:44.518 "58dbb776-358e-4718-b6d0-c4546af00f0d" 00:14:44.518 ], 00:14:44.519 "product_name": "Malloc disk", 00:14:44.519 "block_size": 512, 00:14:44.519 "num_blocks": 65536, 00:14:44.519 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:44.519 "assigned_rate_limits": { 00:14:44.519 "rw_ios_per_sec": 0, 00:14:44.519 "rw_mbytes_per_sec": 0, 00:14:44.519 "r_mbytes_per_sec": 0, 00:14:44.519 "w_mbytes_per_sec": 0 00:14:44.519 }, 00:14:44.519 "claimed": true, 00:14:44.519 "claim_type": "exclusive_write", 00:14:44.519 "zoned": false, 00:14:44.519 "supported_io_types": { 00:14:44.519 "read": true, 00:14:44.519 "write": true, 00:14:44.519 "unmap": true, 00:14:44.519 "flush": true, 00:14:44.519 "reset": true, 00:14:44.519 "nvme_admin": false, 00:14:44.519 "nvme_io": false, 00:14:44.519 "nvme_io_md": false, 00:14:44.519 "write_zeroes": true, 00:14:44.519 "zcopy": true, 00:14:44.519 "get_zone_info": false, 00:14:44.519 "zone_management": false, 00:14:44.519 "zone_append": false, 00:14:44.519 "compare": false, 00:14:44.519 "compare_and_write": false, 00:14:44.519 "abort": true, 00:14:44.519 "seek_hole": false, 00:14:44.519 "seek_data": false, 00:14:44.519 "copy": true, 00:14:44.519 "nvme_iov_md": false 00:14:44.519 }, 00:14:44.519 "memory_domains": [ 00:14:44.519 { 00:14:44.519 "dma_device_id": "system", 00:14:44.519 "dma_device_type": 1 00:14:44.519 }, 00:14:44.519 { 00:14:44.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.519 "dma_device_type": 2 00:14:44.519 } 00:14:44.519 ], 00:14:44.519 "driver_specific": {} 
00:14:44.519 } 00:14:44.519 ] 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.519 14:14:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.777 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.777 "name": "Existed_Raid", 00:14:44.777 "uuid": "7174809e-1d87-453e-9f6d-a316b4fe149e", 00:14:44.777 "strip_size_kb": 64, 00:14:44.777 "state": "configuring", 00:14:44.777 "raid_level": "raid0", 00:14:44.777 "superblock": true, 00:14:44.777 "num_base_bdevs": 4, 00:14:44.777 "num_base_bdevs_discovered": 1, 00:14:44.777 "num_base_bdevs_operational": 4, 00:14:44.777 "base_bdevs_list": [ 00:14:44.777 { 00:14:44.777 "name": "BaseBdev1", 00:14:44.777 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:44.777 "is_configured": true, 00:14:44.777 "data_offset": 2048, 00:14:44.777 "data_size": 63488 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "name": "BaseBdev2", 00:14:44.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.777 "is_configured": false, 00:14:44.777 "data_offset": 0, 00:14:44.777 "data_size": 0 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "name": "BaseBdev3", 00:14:44.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.777 "is_configured": false, 00:14:44.777 "data_offset": 0, 00:14:44.777 "data_size": 0 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "name": "BaseBdev4", 00:14:44.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.777 "is_configured": false, 00:14:44.777 "data_offset": 0, 00:14:44.777 "data_size": 0 00:14:44.777 } 00:14:44.777 ] 00:14:44.777 }' 00:14:44.777 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.777 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.036 [2024-11-27 14:14:15.913307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.036 [2024-11-27 14:14:15.913460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.036 [2024-11-27 14:14:15.925369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.036 [2024-11-27 14:14:15.927419] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.036 [2024-11-27 14:14:15.927503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.036 [2024-11-27 14:14:15.927542] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.036 [2024-11-27 14:14:15.927579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.036 [2024-11-27 14:14:15.927610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:45.036 [2024-11-27 14:14:15.927643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.036 14:14:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.036 "name": 
"Existed_Raid", 00:14:45.036 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:45.036 "strip_size_kb": 64, 00:14:45.036 "state": "configuring", 00:14:45.036 "raid_level": "raid0", 00:14:45.036 "superblock": true, 00:14:45.036 "num_base_bdevs": 4, 00:14:45.036 "num_base_bdevs_discovered": 1, 00:14:45.036 "num_base_bdevs_operational": 4, 00:14:45.036 "base_bdevs_list": [ 00:14:45.036 { 00:14:45.036 "name": "BaseBdev1", 00:14:45.036 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:45.036 "is_configured": true, 00:14:45.036 "data_offset": 2048, 00:14:45.036 "data_size": 63488 00:14:45.036 }, 00:14:45.036 { 00:14:45.036 "name": "BaseBdev2", 00:14:45.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.036 "is_configured": false, 00:14:45.036 "data_offset": 0, 00:14:45.036 "data_size": 0 00:14:45.036 }, 00:14:45.036 { 00:14:45.036 "name": "BaseBdev3", 00:14:45.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.036 "is_configured": false, 00:14:45.036 "data_offset": 0, 00:14:45.036 "data_size": 0 00:14:45.036 }, 00:14:45.036 { 00:14:45.036 "name": "BaseBdev4", 00:14:45.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.036 "is_configured": false, 00:14:45.036 "data_offset": 0, 00:14:45.036 "data_size": 0 00:14:45.036 } 00:14:45.036 ] 00:14:45.036 }' 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.036 14:14:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.653 [2024-11-27 14:14:16.406293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:45.653 BaseBdev2 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.653 [ 00:14:45.653 { 00:14:45.653 "name": "BaseBdev2", 00:14:45.653 "aliases": [ 00:14:45.653 "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923" 00:14:45.653 ], 00:14:45.653 "product_name": "Malloc disk", 00:14:45.653 "block_size": 512, 00:14:45.653 "num_blocks": 65536, 00:14:45.653 "uuid": "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:45.653 
"assigned_rate_limits": { 00:14:45.653 "rw_ios_per_sec": 0, 00:14:45.653 "rw_mbytes_per_sec": 0, 00:14:45.653 "r_mbytes_per_sec": 0, 00:14:45.653 "w_mbytes_per_sec": 0 00:14:45.653 }, 00:14:45.653 "claimed": true, 00:14:45.653 "claim_type": "exclusive_write", 00:14:45.653 "zoned": false, 00:14:45.653 "supported_io_types": { 00:14:45.653 "read": true, 00:14:45.653 "write": true, 00:14:45.653 "unmap": true, 00:14:45.653 "flush": true, 00:14:45.653 "reset": true, 00:14:45.653 "nvme_admin": false, 00:14:45.653 "nvme_io": false, 00:14:45.653 "nvme_io_md": false, 00:14:45.653 "write_zeroes": true, 00:14:45.653 "zcopy": true, 00:14:45.653 "get_zone_info": false, 00:14:45.653 "zone_management": false, 00:14:45.653 "zone_append": false, 00:14:45.653 "compare": false, 00:14:45.653 "compare_and_write": false, 00:14:45.653 "abort": true, 00:14:45.653 "seek_hole": false, 00:14:45.653 "seek_data": false, 00:14:45.653 "copy": true, 00:14:45.653 "nvme_iov_md": false 00:14:45.653 }, 00:14:45.653 "memory_domains": [ 00:14:45.653 { 00:14:45.653 "dma_device_id": "system", 00:14:45.653 "dma_device_type": 1 00:14:45.653 }, 00:14:45.653 { 00:14:45.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.653 "dma_device_type": 2 00:14:45.653 } 00:14:45.653 ], 00:14:45.653 "driver_specific": {} 00:14:45.653 } 00:14:45.653 ] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.653 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.653 "name": "Existed_Raid", 00:14:45.653 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:45.653 "strip_size_kb": 64, 00:14:45.653 "state": "configuring", 00:14:45.653 "raid_level": "raid0", 00:14:45.653 "superblock": true, 00:14:45.653 "num_base_bdevs": 4, 00:14:45.653 "num_base_bdevs_discovered": 2, 00:14:45.653 "num_base_bdevs_operational": 4, 
00:14:45.653 "base_bdevs_list": [ 00:14:45.653 { 00:14:45.653 "name": "BaseBdev1", 00:14:45.653 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:45.653 "is_configured": true, 00:14:45.654 "data_offset": 2048, 00:14:45.654 "data_size": 63488 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": "BaseBdev2", 00:14:45.654 "uuid": "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:45.654 "is_configured": true, 00:14:45.654 "data_offset": 2048, 00:14:45.654 "data_size": 63488 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": "BaseBdev3", 00:14:45.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.654 "is_configured": false, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 0 00:14:45.654 }, 00:14:45.654 { 00:14:45.654 "name": "BaseBdev4", 00:14:45.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.654 "is_configured": false, 00:14:45.654 "data_offset": 0, 00:14:45.654 "data_size": 0 00:14:45.654 } 00:14:45.654 ] 00:14:45.654 }' 00:14:45.654 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.654 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 [2024-11-27 14:14:16.964909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.223 BaseBdev3 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 14:14:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 [ 00:14:46.223 { 00:14:46.223 "name": "BaseBdev3", 00:14:46.223 "aliases": [ 00:14:46.223 "248a4ea3-7b3d-466f-a8cd-791d136a1735" 00:14:46.223 ], 00:14:46.223 "product_name": "Malloc disk", 00:14:46.223 "block_size": 512, 00:14:46.223 "num_blocks": 65536, 00:14:46.223 "uuid": "248a4ea3-7b3d-466f-a8cd-791d136a1735", 00:14:46.223 "assigned_rate_limits": { 00:14:46.223 "rw_ios_per_sec": 0, 00:14:46.223 "rw_mbytes_per_sec": 0, 00:14:46.223 "r_mbytes_per_sec": 0, 00:14:46.223 "w_mbytes_per_sec": 0 00:14:46.223 }, 00:14:46.223 "claimed": true, 00:14:46.223 "claim_type": "exclusive_write", 00:14:46.223 "zoned": false, 00:14:46.223 "supported_io_types": { 00:14:46.223 "read": true, 00:14:46.223 
"write": true, 00:14:46.223 "unmap": true, 00:14:46.223 "flush": true, 00:14:46.223 "reset": true, 00:14:46.223 "nvme_admin": false, 00:14:46.223 "nvme_io": false, 00:14:46.223 "nvme_io_md": false, 00:14:46.223 "write_zeroes": true, 00:14:46.223 "zcopy": true, 00:14:46.223 "get_zone_info": false, 00:14:46.223 "zone_management": false, 00:14:46.223 "zone_append": false, 00:14:46.223 "compare": false, 00:14:46.223 "compare_and_write": false, 00:14:46.223 "abort": true, 00:14:46.223 "seek_hole": false, 00:14:46.223 "seek_data": false, 00:14:46.223 "copy": true, 00:14:46.223 "nvme_iov_md": false 00:14:46.223 }, 00:14:46.223 "memory_domains": [ 00:14:46.223 { 00:14:46.223 "dma_device_id": "system", 00:14:46.223 "dma_device_type": 1 00:14:46.223 }, 00:14:46.223 { 00:14:46.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.223 "dma_device_type": 2 00:14:46.223 } 00:14:46.223 ], 00:14:46.223 "driver_specific": {} 00:14:46.223 } 00:14:46.223 ] 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.223 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.224 "name": "Existed_Raid", 00:14:46.224 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:46.224 "strip_size_kb": 64, 00:14:46.224 "state": "configuring", 00:14:46.224 "raid_level": "raid0", 00:14:46.224 "superblock": true, 00:14:46.224 "num_base_bdevs": 4, 00:14:46.224 "num_base_bdevs_discovered": 3, 00:14:46.224 "num_base_bdevs_operational": 4, 00:14:46.224 "base_bdevs_list": [ 00:14:46.224 { 00:14:46.224 "name": "BaseBdev1", 00:14:46.224 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 2048, 00:14:46.224 "data_size": 63488 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": "BaseBdev2", 00:14:46.224 "uuid": 
"f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 2048, 00:14:46.224 "data_size": 63488 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": "BaseBdev3", 00:14:46.224 "uuid": "248a4ea3-7b3d-466f-a8cd-791d136a1735", 00:14:46.224 "is_configured": true, 00:14:46.224 "data_offset": 2048, 00:14:46.224 "data_size": 63488 00:14:46.224 }, 00:14:46.224 { 00:14:46.224 "name": "BaseBdev4", 00:14:46.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.224 "is_configured": false, 00:14:46.224 "data_offset": 0, 00:14:46.224 "data_size": 0 00:14:46.224 } 00:14:46.224 ] 00:14:46.224 }' 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.224 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.795 [2024-11-27 14:14:17.485626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.795 [2024-11-27 14:14:17.485912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.795 [2024-11-27 14:14:17.485948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:46.795 [2024-11-27 14:14:17.486307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:46.795 [2024-11-27 14:14:17.486507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.795 [2024-11-27 14:14:17.486522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:14:46.795 BaseBdev4 00:14:46.795 [2024-11-27 14:14:17.486692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.795 [ 00:14:46.795 { 00:14:46.795 "name": "BaseBdev4", 00:14:46.795 "aliases": [ 00:14:46.795 "ccd9ebef-2bb7-40a8-9b7e-7a12b70bef54" 00:14:46.795 ], 00:14:46.795 "product_name": "Malloc disk", 00:14:46.795 "block_size": 512, 00:14:46.795 
"num_blocks": 65536, 00:14:46.795 "uuid": "ccd9ebef-2bb7-40a8-9b7e-7a12b70bef54", 00:14:46.795 "assigned_rate_limits": { 00:14:46.795 "rw_ios_per_sec": 0, 00:14:46.795 "rw_mbytes_per_sec": 0, 00:14:46.795 "r_mbytes_per_sec": 0, 00:14:46.795 "w_mbytes_per_sec": 0 00:14:46.795 }, 00:14:46.795 "claimed": true, 00:14:46.795 "claim_type": "exclusive_write", 00:14:46.795 "zoned": false, 00:14:46.795 "supported_io_types": { 00:14:46.795 "read": true, 00:14:46.795 "write": true, 00:14:46.795 "unmap": true, 00:14:46.795 "flush": true, 00:14:46.795 "reset": true, 00:14:46.795 "nvme_admin": false, 00:14:46.795 "nvme_io": false, 00:14:46.795 "nvme_io_md": false, 00:14:46.795 "write_zeroes": true, 00:14:46.795 "zcopy": true, 00:14:46.795 "get_zone_info": false, 00:14:46.795 "zone_management": false, 00:14:46.795 "zone_append": false, 00:14:46.795 "compare": false, 00:14:46.795 "compare_and_write": false, 00:14:46.795 "abort": true, 00:14:46.795 "seek_hole": false, 00:14:46.795 "seek_data": false, 00:14:46.795 "copy": true, 00:14:46.795 "nvme_iov_md": false 00:14:46.795 }, 00:14:46.795 "memory_domains": [ 00:14:46.795 { 00:14:46.795 "dma_device_id": "system", 00:14:46.795 "dma_device_type": 1 00:14:46.795 }, 00:14:46.795 { 00:14:46.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.795 "dma_device_type": 2 00:14:46.795 } 00:14:46.795 ], 00:14:46.795 "driver_specific": {} 00:14:46.795 } 00:14:46.795 ] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.795 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.795 "name": "Existed_Raid", 00:14:46.795 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:46.795 "strip_size_kb": 64, 00:14:46.795 "state": "online", 00:14:46.795 "raid_level": "raid0", 00:14:46.795 "superblock": true, 00:14:46.795 "num_base_bdevs": 4, 
00:14:46.795 "num_base_bdevs_discovered": 4, 00:14:46.795 "num_base_bdevs_operational": 4, 00:14:46.795 "base_bdevs_list": [ 00:14:46.795 { 00:14:46.795 "name": "BaseBdev1", 00:14:46.795 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:46.795 "is_configured": true, 00:14:46.795 "data_offset": 2048, 00:14:46.795 "data_size": 63488 00:14:46.795 }, 00:14:46.795 { 00:14:46.795 "name": "BaseBdev2", 00:14:46.795 "uuid": "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:46.795 "is_configured": true, 00:14:46.795 "data_offset": 2048, 00:14:46.795 "data_size": 63488 00:14:46.795 }, 00:14:46.795 { 00:14:46.795 "name": "BaseBdev3", 00:14:46.795 "uuid": "248a4ea3-7b3d-466f-a8cd-791d136a1735", 00:14:46.795 "is_configured": true, 00:14:46.795 "data_offset": 2048, 00:14:46.795 "data_size": 63488 00:14:46.795 }, 00:14:46.795 { 00:14:46.795 "name": "BaseBdev4", 00:14:46.795 "uuid": "ccd9ebef-2bb7-40a8-9b7e-7a12b70bef54", 00:14:46.795 "is_configured": true, 00:14:46.795 "data_offset": 2048, 00:14:46.795 "data_size": 63488 00:14:46.796 } 00:14:46.796 ] 00:14:46.796 }' 00:14:46.796 14:14:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.796 14:14:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.365 
14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.365 [2024-11-27 14:14:18.021218] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.365 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.365 "name": "Existed_Raid", 00:14:47.365 "aliases": [ 00:14:47.365 "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d" 00:14:47.365 ], 00:14:47.365 "product_name": "Raid Volume", 00:14:47.365 "block_size": 512, 00:14:47.365 "num_blocks": 253952, 00:14:47.365 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:47.365 "assigned_rate_limits": { 00:14:47.365 "rw_ios_per_sec": 0, 00:14:47.365 "rw_mbytes_per_sec": 0, 00:14:47.365 "r_mbytes_per_sec": 0, 00:14:47.365 "w_mbytes_per_sec": 0 00:14:47.365 }, 00:14:47.365 "claimed": false, 00:14:47.365 "zoned": false, 00:14:47.365 "supported_io_types": { 00:14:47.365 "read": true, 00:14:47.365 "write": true, 00:14:47.365 "unmap": true, 00:14:47.365 "flush": true, 00:14:47.365 "reset": true, 00:14:47.365 "nvme_admin": false, 00:14:47.365 "nvme_io": false, 00:14:47.365 "nvme_io_md": false, 00:14:47.365 "write_zeroes": true, 00:14:47.365 "zcopy": false, 00:14:47.365 "get_zone_info": false, 00:14:47.365 "zone_management": false, 00:14:47.365 "zone_append": false, 00:14:47.365 "compare": false, 00:14:47.365 "compare_and_write": false, 00:14:47.365 "abort": false, 00:14:47.365 "seek_hole": false, 00:14:47.365 "seek_data": false, 00:14:47.365 "copy": false, 00:14:47.365 
"nvme_iov_md": false 00:14:47.365 }, 00:14:47.365 "memory_domains": [ 00:14:47.365 { 00:14:47.365 "dma_device_id": "system", 00:14:47.365 "dma_device_type": 1 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.365 "dma_device_type": 2 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "system", 00:14:47.365 "dma_device_type": 1 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.365 "dma_device_type": 2 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "system", 00:14:47.365 "dma_device_type": 1 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.365 "dma_device_type": 2 00:14:47.365 }, 00:14:47.365 { 00:14:47.365 "dma_device_id": "system", 00:14:47.365 "dma_device_type": 1 00:14:47.365 }, 00:14:47.366 { 00:14:47.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.366 "dma_device_type": 2 00:14:47.366 } 00:14:47.366 ], 00:14:47.366 "driver_specific": { 00:14:47.366 "raid": { 00:14:47.366 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:47.366 "strip_size_kb": 64, 00:14:47.366 "state": "online", 00:14:47.366 "raid_level": "raid0", 00:14:47.366 "superblock": true, 00:14:47.366 "num_base_bdevs": 4, 00:14:47.366 "num_base_bdevs_discovered": 4, 00:14:47.366 "num_base_bdevs_operational": 4, 00:14:47.366 "base_bdevs_list": [ 00:14:47.366 { 00:14:47.366 "name": "BaseBdev1", 00:14:47.366 "uuid": "58dbb776-358e-4718-b6d0-c4546af00f0d", 00:14:47.366 "is_configured": true, 00:14:47.366 "data_offset": 2048, 00:14:47.366 "data_size": 63488 00:14:47.366 }, 00:14:47.366 { 00:14:47.366 "name": "BaseBdev2", 00:14:47.366 "uuid": "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:47.366 "is_configured": true, 00:14:47.366 "data_offset": 2048, 00:14:47.366 "data_size": 63488 00:14:47.366 }, 00:14:47.366 { 00:14:47.366 "name": "BaseBdev3", 00:14:47.366 "uuid": "248a4ea3-7b3d-466f-a8cd-791d136a1735", 00:14:47.366 "is_configured": true, 
00:14:47.366 "data_offset": 2048, 00:14:47.366 "data_size": 63488 00:14:47.366 }, 00:14:47.366 { 00:14:47.366 "name": "BaseBdev4", 00:14:47.366 "uuid": "ccd9ebef-2bb7-40a8-9b7e-7a12b70bef54", 00:14:47.366 "is_configured": true, 00:14:47.366 "data_offset": 2048, 00:14:47.366 "data_size": 63488 00:14:47.366 } 00:14:47.366 ] 00:14:47.366 } 00:14:47.366 } 00:14:47.366 }' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:47.366 BaseBdev2 00:14:47.366 BaseBdev3 00:14:47.366 BaseBdev4' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.366 14:14:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:47.366 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.626 [2024-11-27 14:14:18.372363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:47.626 [2024-11-27 14:14:18.372399] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.626 [2024-11-27 14:14:18.372455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.626 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:47.627 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.627 "name": "Existed_Raid", 00:14:47.627 "uuid": "2a0cf5f4-ee4d-42e0-9e64-f77c45d4b14d", 00:14:47.627 "strip_size_kb": 64, 00:14:47.627 "state": "offline", 00:14:47.627 "raid_level": "raid0", 00:14:47.627 "superblock": true, 00:14:47.627 "num_base_bdevs": 4, 00:14:47.627 "num_base_bdevs_discovered": 3, 00:14:47.627 "num_base_bdevs_operational": 3, 00:14:47.627 "base_bdevs_list": [ 00:14:47.627 { 00:14:47.627 "name": null, 00:14:47.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.627 "is_configured": false, 00:14:47.627 "data_offset": 0, 00:14:47.627 "data_size": 63488 00:14:47.627 }, 00:14:47.627 { 00:14:47.627 "name": "BaseBdev2", 00:14:47.627 "uuid": "f04c04c1-1ef2-4bb8-ba1e-7a1a36a7e923", 00:14:47.627 "is_configured": true, 00:14:47.627 "data_offset": 2048, 00:14:47.627 "data_size": 63488 00:14:47.627 }, 00:14:47.627 { 00:14:47.627 "name": "BaseBdev3", 00:14:47.627 "uuid": "248a4ea3-7b3d-466f-a8cd-791d136a1735", 00:14:47.627 "is_configured": true, 00:14:47.627 "data_offset": 2048, 00:14:47.627 "data_size": 63488 00:14:47.627 }, 00:14:47.627 { 00:14:47.627 "name": "BaseBdev4", 00:14:47.627 "uuid": "ccd9ebef-2bb7-40a8-9b7e-7a12b70bef54", 00:14:47.627 "is_configured": true, 00:14:47.627 "data_offset": 2048, 00:14:47.627 "data_size": 63488 00:14:47.627 } 00:14:47.627 ] 00:14:47.627 }' 00:14:47.627 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.627 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.196 
14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.196 14:14:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.196 [2024-11-27 14:14:18.961261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.196 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.196 [2024-11-27 14:14:19.128339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:48.470 14:14:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.470 [2024-11-27 14:14:19.287383] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:48.470 [2024-11-27 14:14:19.287435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.470 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 BaseBdev2 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 [ 00:14:48.768 { 00:14:48.768 "name": "BaseBdev2", 00:14:48.768 "aliases": [ 00:14:48.768 
"0c598dc5-3de0-47ca-b244-aab27292edf1" 00:14:48.768 ], 00:14:48.768 "product_name": "Malloc disk", 00:14:48.768 "block_size": 512, 00:14:48.768 "num_blocks": 65536, 00:14:48.768 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:48.768 "assigned_rate_limits": { 00:14:48.768 "rw_ios_per_sec": 0, 00:14:48.768 "rw_mbytes_per_sec": 0, 00:14:48.768 "r_mbytes_per_sec": 0, 00:14:48.768 "w_mbytes_per_sec": 0 00:14:48.768 }, 00:14:48.768 "claimed": false, 00:14:48.768 "zoned": false, 00:14:48.768 "supported_io_types": { 00:14:48.768 "read": true, 00:14:48.768 "write": true, 00:14:48.768 "unmap": true, 00:14:48.768 "flush": true, 00:14:48.768 "reset": true, 00:14:48.768 "nvme_admin": false, 00:14:48.768 "nvme_io": false, 00:14:48.768 "nvme_io_md": false, 00:14:48.768 "write_zeroes": true, 00:14:48.768 "zcopy": true, 00:14:48.768 "get_zone_info": false, 00:14:48.768 "zone_management": false, 00:14:48.768 "zone_append": false, 00:14:48.768 "compare": false, 00:14:48.768 "compare_and_write": false, 00:14:48.768 "abort": true, 00:14:48.768 "seek_hole": false, 00:14:48.768 "seek_data": false, 00:14:48.768 "copy": true, 00:14:48.768 "nvme_iov_md": false 00:14:48.768 }, 00:14:48.768 "memory_domains": [ 00:14:48.768 { 00:14:48.768 "dma_device_id": "system", 00:14:48.768 "dma_device_type": 1 00:14:48.768 }, 00:14:48.768 { 00:14:48.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.768 "dma_device_type": 2 00:14:48.768 } 00:14:48.768 ], 00:14:48.768 "driver_specific": {} 00:14:48.768 } 00:14:48.768 ] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.768 14:14:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 BaseBdev3 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.768 [ 00:14:48.768 { 
00:14:48.768 "name": "BaseBdev3", 00:14:48.768 "aliases": [ 00:14:48.768 "85bab8b5-1d10-4a0d-af24-e5dc884d8fda" 00:14:48.768 ], 00:14:48.768 "product_name": "Malloc disk", 00:14:48.768 "block_size": 512, 00:14:48.768 "num_blocks": 65536, 00:14:48.768 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:48.768 "assigned_rate_limits": { 00:14:48.768 "rw_ios_per_sec": 0, 00:14:48.768 "rw_mbytes_per_sec": 0, 00:14:48.768 "r_mbytes_per_sec": 0, 00:14:48.768 "w_mbytes_per_sec": 0 00:14:48.768 }, 00:14:48.768 "claimed": false, 00:14:48.768 "zoned": false, 00:14:48.768 "supported_io_types": { 00:14:48.768 "read": true, 00:14:48.768 "write": true, 00:14:48.768 "unmap": true, 00:14:48.768 "flush": true, 00:14:48.768 "reset": true, 00:14:48.768 "nvme_admin": false, 00:14:48.768 "nvme_io": false, 00:14:48.768 "nvme_io_md": false, 00:14:48.768 "write_zeroes": true, 00:14:48.768 "zcopy": true, 00:14:48.768 "get_zone_info": false, 00:14:48.768 "zone_management": false, 00:14:48.768 "zone_append": false, 00:14:48.768 "compare": false, 00:14:48.768 "compare_and_write": false, 00:14:48.768 "abort": true, 00:14:48.768 "seek_hole": false, 00:14:48.768 "seek_data": false, 00:14:48.768 "copy": true, 00:14:48.768 "nvme_iov_md": false 00:14:48.768 }, 00:14:48.768 "memory_domains": [ 00:14:48.768 { 00:14:48.768 "dma_device_id": "system", 00:14:48.768 "dma_device_type": 1 00:14:48.768 }, 00:14:48.768 { 00:14:48.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.768 "dma_device_type": 2 00:14:48.768 } 00:14:48.768 ], 00:14:48.768 "driver_specific": {} 00:14:48.768 } 00:14:48.768 ] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:48.768 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.769 BaseBdev4 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.769 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:48.769 [ 00:14:48.769 { 00:14:48.769 "name": "BaseBdev4", 00:14:48.769 "aliases": [ 00:14:48.769 "320e81fe-2530-4763-b1a2-806b77a6f2cb" 00:14:48.769 ], 00:14:48.769 "product_name": "Malloc disk", 00:14:48.769 "block_size": 512, 00:14:48.769 "num_blocks": 65536, 00:14:48.769 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:48.769 "assigned_rate_limits": { 00:14:48.769 "rw_ios_per_sec": 0, 00:14:48.769 "rw_mbytes_per_sec": 0, 00:14:48.769 "r_mbytes_per_sec": 0, 00:14:48.769 "w_mbytes_per_sec": 0 00:14:48.769 }, 00:14:48.769 "claimed": false, 00:14:48.769 "zoned": false, 00:14:48.769 "supported_io_types": { 00:14:48.769 "read": true, 00:14:48.769 "write": true, 00:14:48.769 "unmap": true, 00:14:48.769 "flush": true, 00:14:48.769 "reset": true, 00:14:48.769 "nvme_admin": false, 00:14:48.769 "nvme_io": false, 00:14:48.769 "nvme_io_md": false, 00:14:48.769 "write_zeroes": true, 00:14:48.769 "zcopy": true, 00:14:48.769 "get_zone_info": false, 00:14:48.769 "zone_management": false, 00:14:48.769 "zone_append": false, 00:14:48.769 "compare": false, 00:14:48.769 "compare_and_write": false, 00:14:48.769 "abort": true, 00:14:48.769 "seek_hole": false, 00:14:48.769 "seek_data": false, 00:14:48.769 "copy": true, 00:14:48.769 "nvme_iov_md": false 00:14:48.769 }, 00:14:48.769 "memory_domains": [ 00:14:48.769 { 00:14:48.769 "dma_device_id": "system", 00:14:48.769 "dma_device_type": 1 00:14:49.049 }, 00:14:49.049 { 00:14:49.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.049 "dma_device_type": 2 00:14:49.049 } 00:14:49.049 ], 00:14:49.049 "driver_specific": {} 00:14:49.049 } 00:14:49.049 ] 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:49.049 14:14:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.049 [2024-11-27 14:14:19.715832] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.049 [2024-11-27 14:14:19.715961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.049 [2024-11-27 14:14:19.716027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.049 [2024-11-27 14:14:19.718503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.049 [2024-11-27 14:14:19.718627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.049 "name": "Existed_Raid", 00:14:49.049 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:49.049 "strip_size_kb": 64, 00:14:49.049 "state": "configuring", 00:14:49.049 "raid_level": "raid0", 00:14:49.049 "superblock": true, 00:14:49.049 "num_base_bdevs": 4, 00:14:49.049 "num_base_bdevs_discovered": 3, 00:14:49.049 "num_base_bdevs_operational": 4, 00:14:49.049 "base_bdevs_list": [ 00:14:49.049 { 00:14:49.049 "name": "BaseBdev1", 00:14:49.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.049 "is_configured": false, 00:14:49.049 "data_offset": 0, 00:14:49.049 "data_size": 0 00:14:49.049 }, 00:14:49.049 { 00:14:49.049 "name": "BaseBdev2", 00:14:49.049 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:49.049 "is_configured": true, 00:14:49.049 "data_offset": 2048, 00:14:49.049 "data_size": 63488 
00:14:49.049 }, 00:14:49.049 { 00:14:49.049 "name": "BaseBdev3", 00:14:49.049 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:49.049 "is_configured": true, 00:14:49.049 "data_offset": 2048, 00:14:49.049 "data_size": 63488 00:14:49.049 }, 00:14:49.049 { 00:14:49.049 "name": "BaseBdev4", 00:14:49.049 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:49.049 "is_configured": true, 00:14:49.049 "data_offset": 2048, 00:14:49.049 "data_size": 63488 00:14:49.049 } 00:14:49.049 ] 00:14:49.049 }' 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.049 14:14:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.309 [2024-11-27 14:14:20.171095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.309 "name": "Existed_Raid", 00:14:49.309 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:49.309 "strip_size_kb": 64, 00:14:49.309 "state": "configuring", 00:14:49.309 "raid_level": "raid0", 00:14:49.309 "superblock": true, 00:14:49.309 "num_base_bdevs": 4, 00:14:49.309 "num_base_bdevs_discovered": 2, 00:14:49.309 "num_base_bdevs_operational": 4, 00:14:49.309 "base_bdevs_list": [ 00:14:49.309 { 00:14:49.309 "name": "BaseBdev1", 00:14:49.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.309 "is_configured": false, 00:14:49.309 "data_offset": 0, 00:14:49.309 "data_size": 0 00:14:49.309 }, 00:14:49.309 { 00:14:49.309 "name": null, 00:14:49.309 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:49.309 "is_configured": false, 00:14:49.309 "data_offset": 0, 00:14:49.309 "data_size": 63488 
00:14:49.309 }, 00:14:49.309 { 00:14:49.309 "name": "BaseBdev3", 00:14:49.309 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:49.309 "is_configured": true, 00:14:49.309 "data_offset": 2048, 00:14:49.309 "data_size": 63488 00:14:49.309 }, 00:14:49.309 { 00:14:49.309 "name": "BaseBdev4", 00:14:49.309 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:49.309 "is_configured": true, 00:14:49.309 "data_offset": 2048, 00:14:49.309 "data_size": 63488 00:14:49.309 } 00:14:49.309 ] 00:14:49.309 }' 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.309 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 [2024-11-27 14:14:20.725049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.878 BaseBdev1 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 [ 00:14:49.878 { 00:14:49.878 "name": "BaseBdev1", 00:14:49.878 "aliases": [ 00:14:49.878 "fff57c99-438d-42a1-a963-a7e59968594f" 00:14:49.878 ], 00:14:49.878 "product_name": "Malloc disk", 00:14:49.878 "block_size": 512, 00:14:49.878 "num_blocks": 65536, 00:14:49.878 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:49.878 "assigned_rate_limits": { 00:14:49.878 "rw_ios_per_sec": 0, 00:14:49.878 "rw_mbytes_per_sec": 0, 
00:14:49.878 "r_mbytes_per_sec": 0, 00:14:49.878 "w_mbytes_per_sec": 0 00:14:49.878 }, 00:14:49.878 "claimed": true, 00:14:49.878 "claim_type": "exclusive_write", 00:14:49.878 "zoned": false, 00:14:49.878 "supported_io_types": { 00:14:49.878 "read": true, 00:14:49.878 "write": true, 00:14:49.878 "unmap": true, 00:14:49.878 "flush": true, 00:14:49.878 "reset": true, 00:14:49.878 "nvme_admin": false, 00:14:49.878 "nvme_io": false, 00:14:49.878 "nvme_io_md": false, 00:14:49.878 "write_zeroes": true, 00:14:49.878 "zcopy": true, 00:14:49.878 "get_zone_info": false, 00:14:49.878 "zone_management": false, 00:14:49.878 "zone_append": false, 00:14:49.878 "compare": false, 00:14:49.878 "compare_and_write": false, 00:14:49.878 "abort": true, 00:14:49.878 "seek_hole": false, 00:14:49.878 "seek_data": false, 00:14:49.878 "copy": true, 00:14:49.878 "nvme_iov_md": false 00:14:49.878 }, 00:14:49.878 "memory_domains": [ 00:14:49.878 { 00:14:49.878 "dma_device_id": "system", 00:14:49.878 "dma_device_type": 1 00:14:49.878 }, 00:14:49.878 { 00:14:49.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.878 "dma_device_type": 2 00:14:49.878 } 00:14:49.878 ], 00:14:49.878 "driver_specific": {} 00:14:49.878 } 00:14:49.878 ] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:49.878 14:14:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.878 "name": "Existed_Raid", 00:14:49.878 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:49.878 "strip_size_kb": 64, 00:14:49.878 "state": "configuring", 00:14:49.878 "raid_level": "raid0", 00:14:49.878 "superblock": true, 00:14:49.878 "num_base_bdevs": 4, 00:14:49.878 "num_base_bdevs_discovered": 3, 00:14:49.878 "num_base_bdevs_operational": 4, 00:14:49.878 "base_bdevs_list": [ 00:14:49.878 { 00:14:49.878 "name": "BaseBdev1", 00:14:49.878 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:49.878 "is_configured": true, 00:14:49.878 "data_offset": 2048, 00:14:49.878 "data_size": 63488 00:14:49.878 }, 00:14:49.878 { 
00:14:49.878 "name": null, 00:14:49.878 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:49.878 "is_configured": false, 00:14:49.878 "data_offset": 0, 00:14:49.878 "data_size": 63488 00:14:49.878 }, 00:14:49.878 { 00:14:49.878 "name": "BaseBdev3", 00:14:49.878 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:49.878 "is_configured": true, 00:14:49.878 "data_offset": 2048, 00:14:49.878 "data_size": 63488 00:14:49.878 }, 00:14:49.878 { 00:14:49.878 "name": "BaseBdev4", 00:14:49.878 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:49.878 "is_configured": true, 00:14:49.878 "data_offset": 2048, 00:14:49.878 "data_size": 63488 00:14:49.878 } 00:14:49.878 ] 00:14:49.878 }' 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.878 14:14:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.448 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.449 [2024-11-27 14:14:21.320268] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.449 14:14:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.449 "name": "Existed_Raid", 00:14:50.449 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:50.449 "strip_size_kb": 64, 00:14:50.449 "state": "configuring", 00:14:50.449 "raid_level": "raid0", 00:14:50.449 "superblock": true, 00:14:50.449 "num_base_bdevs": 4, 00:14:50.449 "num_base_bdevs_discovered": 2, 00:14:50.449 "num_base_bdevs_operational": 4, 00:14:50.449 "base_bdevs_list": [ 00:14:50.449 { 00:14:50.449 "name": "BaseBdev1", 00:14:50.449 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:50.449 "is_configured": true, 00:14:50.449 "data_offset": 2048, 00:14:50.449 "data_size": 63488 00:14:50.449 }, 00:14:50.449 { 00:14:50.449 "name": null, 00:14:50.449 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:50.449 "is_configured": false, 00:14:50.449 "data_offset": 0, 00:14:50.449 "data_size": 63488 00:14:50.449 }, 00:14:50.449 { 00:14:50.449 "name": null, 00:14:50.449 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:50.449 "is_configured": false, 00:14:50.449 "data_offset": 0, 00:14:50.449 "data_size": 63488 00:14:50.449 }, 00:14:50.449 { 00:14:50.449 "name": "BaseBdev4", 00:14:50.449 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:50.449 "is_configured": true, 00:14:50.449 "data_offset": 2048, 00:14:50.449 "data_size": 63488 00:14:50.449 } 00:14:50.449 ] 00:14:50.449 }' 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.449 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.018 
14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.018 [2024-11-27 14:14:21.847313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.018 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.019 "name": "Existed_Raid", 00:14:51.019 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:51.019 "strip_size_kb": 64, 00:14:51.019 "state": "configuring", 00:14:51.019 "raid_level": "raid0", 00:14:51.019 "superblock": true, 00:14:51.019 "num_base_bdevs": 4, 00:14:51.019 "num_base_bdevs_discovered": 3, 00:14:51.019 "num_base_bdevs_operational": 4, 00:14:51.019 "base_bdevs_list": [ 00:14:51.019 { 00:14:51.019 "name": "BaseBdev1", 00:14:51.019 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:51.019 "is_configured": true, 00:14:51.019 "data_offset": 2048, 00:14:51.019 "data_size": 63488 00:14:51.019 }, 00:14:51.019 { 00:14:51.019 "name": null, 00:14:51.019 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:51.019 "is_configured": false, 00:14:51.019 "data_offset": 0, 00:14:51.019 "data_size": 63488 00:14:51.019 }, 00:14:51.019 { 00:14:51.019 "name": "BaseBdev3", 00:14:51.019 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:51.019 "is_configured": true, 00:14:51.019 "data_offset": 2048, 00:14:51.019 "data_size": 63488 00:14:51.019 }, 00:14:51.019 { 00:14:51.019 "name": "BaseBdev4", 00:14:51.019 "uuid": 
"320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:51.019 "is_configured": true, 00:14:51.019 "data_offset": 2048, 00:14:51.019 "data_size": 63488 00:14:51.019 } 00:14:51.019 ] 00:14:51.019 }' 00:14:51.019 14:14:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.019 14:14:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.588 [2024-11-27 14:14:22.370516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.588 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.589 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.589 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.589 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.589 "name": "Existed_Raid", 00:14:51.589 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:51.589 "strip_size_kb": 64, 00:14:51.589 "state": "configuring", 00:14:51.589 "raid_level": "raid0", 00:14:51.589 "superblock": true, 00:14:51.589 "num_base_bdevs": 4, 00:14:51.589 "num_base_bdevs_discovered": 2, 00:14:51.589 "num_base_bdevs_operational": 4, 00:14:51.589 "base_bdevs_list": [ 00:14:51.589 { 00:14:51.589 "name": null, 00:14:51.589 
"uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:51.589 "is_configured": false, 00:14:51.589 "data_offset": 0, 00:14:51.589 "data_size": 63488 00:14:51.589 }, 00:14:51.589 { 00:14:51.589 "name": null, 00:14:51.589 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:51.589 "is_configured": false, 00:14:51.589 "data_offset": 0, 00:14:51.589 "data_size": 63488 00:14:51.589 }, 00:14:51.589 { 00:14:51.589 "name": "BaseBdev3", 00:14:51.589 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:51.589 "is_configured": true, 00:14:51.589 "data_offset": 2048, 00:14:51.589 "data_size": 63488 00:14:51.589 }, 00:14:51.589 { 00:14:51.589 "name": "BaseBdev4", 00:14:51.589 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:51.589 "is_configured": true, 00:14:51.589 "data_offset": 2048, 00:14:51.589 "data_size": 63488 00:14:51.589 } 00:14:51.589 ] 00:14:51.589 }' 00:14:51.589 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.589 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 [2024-11-27 14:14:22.966461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.158 14:14:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 14:14:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.158 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.158 "name": "Existed_Raid", 00:14:52.158 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:52.158 "strip_size_kb": 64, 00:14:52.158 "state": "configuring", 00:14:52.158 "raid_level": "raid0", 00:14:52.158 "superblock": true, 00:14:52.158 "num_base_bdevs": 4, 00:14:52.158 "num_base_bdevs_discovered": 3, 00:14:52.158 "num_base_bdevs_operational": 4, 00:14:52.158 "base_bdevs_list": [ 00:14:52.158 { 00:14:52.158 "name": null, 00:14:52.158 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:52.158 "is_configured": false, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 63488 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev2", 00:14:52.159 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 2048, 00:14:52.159 "data_size": 63488 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev3", 00:14:52.159 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 2048, 00:14:52.159 "data_size": 63488 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev4", 00:14:52.159 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 2048, 00:14:52.159 "data_size": 63488 00:14:52.159 } 00:14:52.159 ] 00:14:52.159 }' 00:14:52.159 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.159 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.733 14:14:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fff57c99-438d-42a1-a963-a7e59968594f 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 [2024-11-27 14:14:23.513053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:52.733 [2024-11-27 14:14:23.513329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:52.733 [2024-11-27 14:14:23.513345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:52.733 [2024-11-27 14:14:23.513657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:14:52.733 [2024-11-27 14:14:23.513831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:52.733 [2024-11-27 14:14:23.513843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:52.733 [2024-11-27 14:14:23.514000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.733 NewBaseBdev 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 14:14:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 [ 00:14:52.733 { 00:14:52.733 "name": "NewBaseBdev", 00:14:52.733 "aliases": [ 00:14:52.733 "fff57c99-438d-42a1-a963-a7e59968594f" 00:14:52.733 ], 00:14:52.733 "product_name": "Malloc disk", 00:14:52.733 "block_size": 512, 00:14:52.733 "num_blocks": 65536, 00:14:52.733 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:52.733 "assigned_rate_limits": { 00:14:52.733 "rw_ios_per_sec": 0, 00:14:52.733 "rw_mbytes_per_sec": 0, 00:14:52.733 "r_mbytes_per_sec": 0, 00:14:52.733 "w_mbytes_per_sec": 0 00:14:52.733 }, 00:14:52.733 "claimed": true, 00:14:52.733 "claim_type": "exclusive_write", 00:14:52.733 "zoned": false, 00:14:52.733 "supported_io_types": { 00:14:52.733 "read": true, 00:14:52.733 "write": true, 00:14:52.733 "unmap": true, 00:14:52.733 "flush": true, 00:14:52.733 "reset": true, 00:14:52.733 "nvme_admin": false, 00:14:52.733 "nvme_io": false, 00:14:52.733 "nvme_io_md": false, 00:14:52.733 "write_zeroes": true, 00:14:52.733 "zcopy": true, 00:14:52.733 "get_zone_info": false, 00:14:52.733 "zone_management": false, 00:14:52.733 "zone_append": false, 00:14:52.733 "compare": false, 00:14:52.733 "compare_and_write": false, 00:14:52.733 "abort": true, 00:14:52.733 "seek_hole": false, 00:14:52.733 "seek_data": false, 00:14:52.733 "copy": true, 00:14:52.733 "nvme_iov_md": false 00:14:52.733 }, 00:14:52.733 "memory_domains": [ 00:14:52.733 { 00:14:52.733 "dma_device_id": "system", 00:14:52.733 "dma_device_type": 1 00:14:52.733 }, 00:14:52.733 { 00:14:52.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.733 "dma_device_type": 2 00:14:52.733 } 00:14:52.733 ], 00:14:52.733 "driver_specific": {} 00:14:52.733 } 00:14:52.733 ] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.733 14:14:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.733 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.734 "name": "Existed_Raid", 00:14:52.734 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:52.734 "strip_size_kb": 64, 00:14:52.734 
"state": "online", 00:14:52.734 "raid_level": "raid0", 00:14:52.734 "superblock": true, 00:14:52.734 "num_base_bdevs": 4, 00:14:52.734 "num_base_bdevs_discovered": 4, 00:14:52.734 "num_base_bdevs_operational": 4, 00:14:52.734 "base_bdevs_list": [ 00:14:52.734 { 00:14:52.734 "name": "NewBaseBdev", 00:14:52.734 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev2", 00:14:52.734 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev3", 00:14:52.734 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev4", 00:14:52.734 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 } 00:14:52.734 ] 00:14:52.734 }' 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.734 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.303 
14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.303 [2024-11-27 14:14:23.976727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.303 14:14:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.303 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.303 "name": "Existed_Raid", 00:14:53.303 "aliases": [ 00:14:53.303 "82534eac-4eb4-4910-81a2-7443cfbc0cd8" 00:14:53.303 ], 00:14:53.303 "product_name": "Raid Volume", 00:14:53.303 "block_size": 512, 00:14:53.303 "num_blocks": 253952, 00:14:53.303 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:53.303 "assigned_rate_limits": { 00:14:53.303 "rw_ios_per_sec": 0, 00:14:53.303 "rw_mbytes_per_sec": 0, 00:14:53.303 "r_mbytes_per_sec": 0, 00:14:53.303 "w_mbytes_per_sec": 0 00:14:53.303 }, 00:14:53.303 "claimed": false, 00:14:53.303 "zoned": false, 00:14:53.303 "supported_io_types": { 00:14:53.303 "read": true, 00:14:53.303 "write": true, 00:14:53.303 "unmap": true, 00:14:53.303 "flush": true, 00:14:53.303 "reset": true, 00:14:53.303 "nvme_admin": false, 00:14:53.303 "nvme_io": false, 00:14:53.303 "nvme_io_md": false, 00:14:53.303 "write_zeroes": true, 00:14:53.303 "zcopy": false, 00:14:53.303 "get_zone_info": false, 00:14:53.303 "zone_management": false, 00:14:53.303 "zone_append": false, 00:14:53.303 "compare": false, 00:14:53.303 "compare_and_write": false, 00:14:53.303 "abort": 
false, 00:14:53.303 "seek_hole": false, 00:14:53.303 "seek_data": false, 00:14:53.303 "copy": false, 00:14:53.303 "nvme_iov_md": false 00:14:53.303 }, 00:14:53.303 "memory_domains": [ 00:14:53.303 { 00:14:53.303 "dma_device_id": "system", 00:14:53.303 "dma_device_type": 1 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.303 "dma_device_type": 2 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "system", 00:14:53.303 "dma_device_type": 1 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.303 "dma_device_type": 2 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "system", 00:14:53.303 "dma_device_type": 1 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.303 "dma_device_type": 2 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "system", 00:14:53.303 "dma_device_type": 1 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.303 "dma_device_type": 2 00:14:53.303 } 00:14:53.303 ], 00:14:53.303 "driver_specific": { 00:14:53.303 "raid": { 00:14:53.303 "uuid": "82534eac-4eb4-4910-81a2-7443cfbc0cd8", 00:14:53.303 "strip_size_kb": 64, 00:14:53.303 "state": "online", 00:14:53.303 "raid_level": "raid0", 00:14:53.303 "superblock": true, 00:14:53.303 "num_base_bdevs": 4, 00:14:53.303 "num_base_bdevs_discovered": 4, 00:14:53.303 "num_base_bdevs_operational": 4, 00:14:53.303 "base_bdevs_list": [ 00:14:53.303 { 00:14:53.303 "name": "NewBaseBdev", 00:14:53.303 "uuid": "fff57c99-438d-42a1-a963-a7e59968594f", 00:14:53.303 "is_configured": true, 00:14:53.303 "data_offset": 2048, 00:14:53.303 "data_size": 63488 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "name": "BaseBdev2", 00:14:53.303 "uuid": "0c598dc5-3de0-47ca-b244-aab27292edf1", 00:14:53.303 "is_configured": true, 00:14:53.304 "data_offset": 2048, 00:14:53.304 "data_size": 63488 00:14:53.304 }, 00:14:53.304 { 00:14:53.304 
"name": "BaseBdev3", 00:14:53.304 "uuid": "85bab8b5-1d10-4a0d-af24-e5dc884d8fda", 00:14:53.304 "is_configured": true, 00:14:53.304 "data_offset": 2048, 00:14:53.304 "data_size": 63488 00:14:53.304 }, 00:14:53.304 { 00:14:53.304 "name": "BaseBdev4", 00:14:53.304 "uuid": "320e81fe-2530-4763-b1a2-806b77a6f2cb", 00:14:53.304 "is_configured": true, 00:14:53.304 "data_offset": 2048, 00:14:53.304 "data_size": 63488 00:14:53.304 } 00:14:53.304 ] 00:14:53.304 } 00:14:53.304 } 00:14:53.304 }' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:53.304 BaseBdev2 00:14:53.304 BaseBdev3 00:14:53.304 BaseBdev4' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.304 14:14:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.304 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.563 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.563 [2024-11-27 14:14:24.319742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.563 [2024-11-27 14:14:24.319776] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:53.563 [2024-11-27 14:14:24.319863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.564 [2024-11-27 14:14:24.319934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.564 [2024-11-27 14:14:24.319944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70269 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70269 ']' 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70269 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70269 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70269' 00:14:53.564 killing process with pid 70269 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70269 00:14:53.564 [2024-11-27 14:14:24.368801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.564 14:14:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70269 00:14:54.133 [2024-11-27 14:14:24.796513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.516 14:14:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:55.516 00:14:55.516 real 0m12.194s 00:14:55.516 user 0m19.296s 00:14:55.516 sys 0m2.084s 00:14:55.516 ************************************ 00:14:55.516 END TEST raid_state_function_test_sb 00:14:55.516 
************************************ 00:14:55.516 14:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.516 14:14:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.516 14:14:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:55.516 14:14:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:55.516 14:14:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.516 14:14:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.516 ************************************ 00:14:55.516 START TEST raid_superblock_test 00:14:55.516 ************************************ 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70949 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70949 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70949 ']' 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.516 14:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.516 [2024-11-27 14:14:26.211370] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:14:55.516 [2024-11-27 14:14:26.211590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70949 ] 00:14:55.516 [2024-11-27 14:14:26.384607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.775 [2024-11-27 14:14:26.504785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.775 [2024-11-27 14:14:26.722314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.775 [2024-11-27 14:14:26.722355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:56.343 
14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.343 malloc1 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.343 [2024-11-27 14:14:27.148137] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:56.343 [2024-11-27 14:14:27.148283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.343 [2024-11-27 14:14:27.148333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.343 [2024-11-27 14:14:27.148372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.343 [2024-11-27 14:14:27.150915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.343 [2024-11-27 14:14:27.151016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:56.343 pt1 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.343 malloc2 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.343 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.344 [2024-11-27 14:14:27.211257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.344 [2024-11-27 14:14:27.211387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.344 [2024-11-27 14:14:27.211438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:56.344 [2024-11-27 14:14:27.211480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.344 [2024-11-27 14:14:27.213976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.344 [2024-11-27 14:14:27.214061] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.344 
pt2 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.344 malloc3 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.344 [2024-11-27 14:14:27.287037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:56.344 [2024-11-27 14:14:27.287179] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.344 [2024-11-27 14:14:27.287253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:56.344 [2024-11-27 14:14:27.287299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.344 [2024-11-27 14:14:27.289696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.344 [2024-11-27 14:14:27.289780] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:56.344 pt3 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:56.344 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.604 malloc4 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.604 [2024-11-27 14:14:27.350640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:56.604 [2024-11-27 14:14:27.350713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.604 [2024-11-27 14:14:27.350739] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:56.604 [2024-11-27 14:14:27.350749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.604 [2024-11-27 14:14:27.353307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.604 [2024-11-27 14:14:27.353390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:56.604 pt4 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.604 [2024-11-27 14:14:27.366701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:56.604 [2024-11-27 
14:14:27.368789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.604 [2024-11-27 14:14:27.368946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:56.604 [2024-11-27 14:14:27.369031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:56.604 [2024-11-27 14:14:27.369285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:56.604 [2024-11-27 14:14:27.369337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:56.604 [2024-11-27 14:14:27.369677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:56.604 [2024-11-27 14:14:27.369924] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:56.604 [2024-11-27 14:14:27.369977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:56.604 [2024-11-27 14:14:27.370248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.604 "name": "raid_bdev1", 00:14:56.604 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:56.604 "strip_size_kb": 64, 00:14:56.604 "state": "online", 00:14:56.604 "raid_level": "raid0", 00:14:56.604 "superblock": true, 00:14:56.604 "num_base_bdevs": 4, 00:14:56.604 "num_base_bdevs_discovered": 4, 00:14:56.604 "num_base_bdevs_operational": 4, 00:14:56.604 "base_bdevs_list": [ 00:14:56.604 { 00:14:56.604 "name": "pt1", 00:14:56.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:56.604 "is_configured": true, 00:14:56.604 "data_offset": 2048, 00:14:56.604 "data_size": 63488 00:14:56.604 }, 00:14:56.604 { 00:14:56.604 "name": "pt2", 00:14:56.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:56.604 "is_configured": true, 00:14:56.604 "data_offset": 2048, 00:14:56.604 "data_size": 63488 00:14:56.604 }, 00:14:56.604 { 00:14:56.604 "name": "pt3", 00:14:56.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:56.604 "is_configured": true, 00:14:56.604 "data_offset": 2048, 00:14:56.604 
"data_size": 63488 00:14:56.604 }, 00:14:56.604 { 00:14:56.604 "name": "pt4", 00:14:56.604 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:56.604 "is_configured": true, 00:14:56.604 "data_offset": 2048, 00:14:56.604 "data_size": 63488 00:14:56.604 } 00:14:56.604 ] 00:14:56.604 }' 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.604 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.173 [2024-11-27 14:14:27.846208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.173 "name": "raid_bdev1", 00:14:57.173 "aliases": [ 00:14:57.173 "e3edc31a-c4bd-4d20-9331-b8d7f1a81857" 
00:14:57.173 ], 00:14:57.173 "product_name": "Raid Volume", 00:14:57.173 "block_size": 512, 00:14:57.173 "num_blocks": 253952, 00:14:57.173 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:57.173 "assigned_rate_limits": { 00:14:57.173 "rw_ios_per_sec": 0, 00:14:57.173 "rw_mbytes_per_sec": 0, 00:14:57.173 "r_mbytes_per_sec": 0, 00:14:57.173 "w_mbytes_per_sec": 0 00:14:57.173 }, 00:14:57.173 "claimed": false, 00:14:57.173 "zoned": false, 00:14:57.173 "supported_io_types": { 00:14:57.173 "read": true, 00:14:57.173 "write": true, 00:14:57.173 "unmap": true, 00:14:57.173 "flush": true, 00:14:57.173 "reset": true, 00:14:57.173 "nvme_admin": false, 00:14:57.173 "nvme_io": false, 00:14:57.173 "nvme_io_md": false, 00:14:57.173 "write_zeroes": true, 00:14:57.173 "zcopy": false, 00:14:57.173 "get_zone_info": false, 00:14:57.173 "zone_management": false, 00:14:57.173 "zone_append": false, 00:14:57.173 "compare": false, 00:14:57.173 "compare_and_write": false, 00:14:57.173 "abort": false, 00:14:57.173 "seek_hole": false, 00:14:57.173 "seek_data": false, 00:14:57.173 "copy": false, 00:14:57.173 "nvme_iov_md": false 00:14:57.173 }, 00:14:57.173 "memory_domains": [ 00:14:57.173 { 00:14:57.173 "dma_device_id": "system", 00:14:57.173 "dma_device_type": 1 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.173 "dma_device_type": 2 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "system", 00:14:57.173 "dma_device_type": 1 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.173 "dma_device_type": 2 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "system", 00:14:57.173 "dma_device_type": 1 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.173 "dma_device_type": 2 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": "system", 00:14:57.173 "dma_device_type": 1 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:57.173 "dma_device_type": 2 00:14:57.173 } 00:14:57.173 ], 00:14:57.173 "driver_specific": { 00:14:57.173 "raid": { 00:14:57.173 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:57.173 "strip_size_kb": 64, 00:14:57.173 "state": "online", 00:14:57.173 "raid_level": "raid0", 00:14:57.173 "superblock": true, 00:14:57.173 "num_base_bdevs": 4, 00:14:57.173 "num_base_bdevs_discovered": 4, 00:14:57.173 "num_base_bdevs_operational": 4, 00:14:57.173 "base_bdevs_list": [ 00:14:57.173 { 00:14:57.173 "name": "pt1", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "name": "pt2", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "name": "pt3", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 }, 00:14:57.173 { 00:14:57.173 "name": "pt4", 00:14:57.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.173 "is_configured": true, 00:14:57.173 "data_offset": 2048, 00:14:57.173 "data_size": 63488 00:14:57.173 } 00:14:57.173 ] 00:14:57.173 } 00:14:57.173 } 00:14:57.173 }' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:57.173 pt2 00:14:57.173 pt3 00:14:57.173 pt4' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.173 14:14:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.173 14:14:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.173 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.432 [2024-11-27 14:14:28.173668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3edc31a-c4bd-4d20-9331-b8d7f1a81857 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e3edc31a-c4bd-4d20-9331-b8d7f1a81857 ']' 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.432 [2024-11-27 14:14:28.225273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.432 [2024-11-27 14:14:28.225307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.432 [2024-11-27 14:14:28.225403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.432 [2024-11-27 14:14:28.225479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.432 [2024-11-27 14:14:28.225499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:57.432 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.433 14:14:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.433 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 [2024-11-27 14:14:28.389047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:57.692 [2024-11-27 14:14:28.391247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:57.692 [2024-11-27 14:14:28.391307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:57.692 [2024-11-27 14:14:28.391347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:57.692 [2024-11-27 14:14:28.391401] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:57.692 [2024-11-27 14:14:28.391459] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:57.692 [2024-11-27 14:14:28.391482] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:57.692 [2024-11-27 14:14:28.391505] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:57.692 [2024-11-27 14:14:28.391521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.692 [2024-11-27 14:14:28.391538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:14:57.692 request: 00:14:57.692 { 00:14:57.692 "name": "raid_bdev1", 00:14:57.692 "raid_level": "raid0", 00:14:57.692 "base_bdevs": [ 00:14:57.692 "malloc1", 00:14:57.692 "malloc2", 00:14:57.692 "malloc3", 00:14:57.692 "malloc4" 00:14:57.692 ], 00:14:57.692 "strip_size_kb": 64, 00:14:57.692 "superblock": false, 00:14:57.692 "method": "bdev_raid_create", 00:14:57.692 "req_id": 1 00:14:57.692 } 00:14:57.692 Got JSON-RPC error response 00:14:57.692 response: 00:14:57.692 { 00:14:57.692 "code": -17, 00:14:57.692 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:57.692 } 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 [2024-11-27 14:14:28.452843] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:57.692 [2024-11-27 14:14:28.452905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:57.692 [2024-11-27 14:14:28.452928] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:57.692 [2024-11-27 14:14:28.452940] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:57.692 [2024-11-27 14:14:28.455318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:57.692 [2024-11-27 14:14:28.455357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:57.692 [2024-11-27 14:14:28.455467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:57.692 [2024-11-27 14:14:28.455527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:57.692 pt1 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.692 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.692 "name": "raid_bdev1", 00:14:57.692 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:57.692 "strip_size_kb": 64, 00:14:57.692 "state": "configuring", 00:14:57.692 "raid_level": "raid0", 00:14:57.692 "superblock": true, 00:14:57.692 "num_base_bdevs": 4, 00:14:57.692 "num_base_bdevs_discovered": 1, 00:14:57.692 "num_base_bdevs_operational": 4, 00:14:57.692 "base_bdevs_list": [ 00:14:57.692 { 00:14:57.692 "name": "pt1", 00:14:57.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:57.692 "is_configured": true, 00:14:57.692 "data_offset": 2048, 00:14:57.692 "data_size": 63488 00:14:57.692 }, 00:14:57.692 { 00:14:57.692 "name": null, 00:14:57.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:57.692 "is_configured": false, 00:14:57.693 "data_offset": 2048, 00:14:57.693 "data_size": 63488 00:14:57.693 }, 00:14:57.693 { 00:14:57.693 "name": null, 00:14:57.693 
"uuid": "00000000-0000-0000-0000-000000000003", 00:14:57.693 "is_configured": false, 00:14:57.693 "data_offset": 2048, 00:14:57.693 "data_size": 63488 00:14:57.693 }, 00:14:57.693 { 00:14:57.693 "name": null, 00:14:57.693 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:57.693 "is_configured": false, 00:14:57.693 "data_offset": 2048, 00:14:57.693 "data_size": 63488 00:14:57.693 } 00:14:57.693 ] 00:14:57.693 }' 00:14:57.693 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.693 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 [2024-11-27 14:14:28.928244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.260 [2024-11-27 14:14:28.928328] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.260 [2024-11-27 14:14:28.928352] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:58.260 [2024-11-27 14:14:28.928365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.260 [2024-11-27 14:14:28.928880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.260 [2024-11-27 14:14:28.928926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.260 [2024-11-27 14:14:28.929025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.260 [2024-11-27 14:14:28.929061] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.260 pt2 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 [2024-11-27 14:14:28.940252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.260 14:14:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.260 "name": "raid_bdev1", 00:14:58.260 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:58.260 "strip_size_kb": 64, 00:14:58.260 "state": "configuring", 00:14:58.260 "raid_level": "raid0", 00:14:58.260 "superblock": true, 00:14:58.260 "num_base_bdevs": 4, 00:14:58.260 "num_base_bdevs_discovered": 1, 00:14:58.260 "num_base_bdevs_operational": 4, 00:14:58.260 "base_bdevs_list": [ 00:14:58.260 { 00:14:58.260 "name": "pt1", 00:14:58.260 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.260 "is_configured": true, 00:14:58.260 "data_offset": 2048, 00:14:58.260 "data_size": 63488 00:14:58.260 }, 00:14:58.260 { 00:14:58.260 "name": null, 00:14:58.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.260 "is_configured": false, 00:14:58.260 "data_offset": 0, 00:14:58.260 "data_size": 63488 00:14:58.260 }, 00:14:58.260 { 00:14:58.260 "name": null, 00:14:58.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.260 "is_configured": false, 00:14:58.260 "data_offset": 2048, 00:14:58.260 "data_size": 63488 00:14:58.260 }, 00:14:58.260 { 00:14:58.260 "name": null, 00:14:58.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.260 "is_configured": false, 00:14:58.260 "data_offset": 2048, 00:14:58.260 "data_size": 63488 00:14:58.260 } 00:14:58.260 ] 00:14:58.260 }' 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.260 14:14:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 [2024-11-27 14:14:29.383690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:58.519 [2024-11-27 14:14:29.383780] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.519 [2024-11-27 14:14:29.383805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:58.519 [2024-11-27 14:14:29.383816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.519 [2024-11-27 14:14:29.384380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.519 [2024-11-27 14:14:29.384410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:58.519 [2024-11-27 14:14:29.384508] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:58.519 [2024-11-27 14:14:29.384535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:58.519 pt2 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 [2024-11-27 14:14:29.391613] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:58.519 [2024-11-27 14:14:29.391666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.519 [2024-11-27 14:14:29.391702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:58.519 [2024-11-27 14:14:29.391712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.519 [2024-11-27 14:14:29.392189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.519 [2024-11-27 14:14:29.392217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:58.519 [2024-11-27 14:14:29.392290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:58.519 [2024-11-27 14:14:29.392320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:58.519 pt3 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.519 [2024-11-27 14:14:29.399570] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:14:58.519 [2024-11-27 14:14:29.399630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.519 [2024-11-27 14:14:29.399648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:58.519 [2024-11-27 14:14:29.399658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.519 [2024-11-27 14:14:29.400077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.519 [2024-11-27 14:14:29.400103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:58.519 [2024-11-27 14:14:29.400185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:58.519 [2024-11-27 14:14:29.400210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:58.519 [2024-11-27 14:14:29.400352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:58.519 [2024-11-27 14:14:29.400370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:58.519 [2024-11-27 14:14:29.400644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:58.519 [2024-11-27 14:14:29.400816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:58.519 [2024-11-27 14:14:29.400838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:58.519 [2024-11-27 14:14:29.400998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.519 pt4 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:58.519 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:58.520 
14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.520 "name": "raid_bdev1", 00:14:58.520 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:58.520 "strip_size_kb": 64, 00:14:58.520 "state": "online", 00:14:58.520 "raid_level": "raid0", 00:14:58.520 "superblock": true, 00:14:58.520 
"num_base_bdevs": 4, 00:14:58.520 "num_base_bdevs_discovered": 4, 00:14:58.520 "num_base_bdevs_operational": 4, 00:14:58.520 "base_bdevs_list": [ 00:14:58.520 { 00:14:58.520 "name": "pt1", 00:14:58.520 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "pt2", 00:14:58.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "pt3", 00:14:58.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 }, 00:14:58.520 { 00:14:58.520 "name": "pt4", 00:14:58.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:58.520 "is_configured": true, 00:14:58.520 "data_offset": 2048, 00:14:58.520 "data_size": 63488 00:14:58.520 } 00:14:58.520 ] 00:14:58.520 }' 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.520 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.086 [2024-11-27 14:14:29.823294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.086 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:59.086 "name": "raid_bdev1", 00:14:59.086 "aliases": [ 00:14:59.086 "e3edc31a-c4bd-4d20-9331-b8d7f1a81857" 00:14:59.086 ], 00:14:59.087 "product_name": "Raid Volume", 00:14:59.087 "block_size": 512, 00:14:59.087 "num_blocks": 253952, 00:14:59.087 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:59.087 "assigned_rate_limits": { 00:14:59.087 "rw_ios_per_sec": 0, 00:14:59.087 "rw_mbytes_per_sec": 0, 00:14:59.087 "r_mbytes_per_sec": 0, 00:14:59.087 "w_mbytes_per_sec": 0 00:14:59.087 }, 00:14:59.087 "claimed": false, 00:14:59.087 "zoned": false, 00:14:59.087 "supported_io_types": { 00:14:59.087 "read": true, 00:14:59.087 "write": true, 00:14:59.087 "unmap": true, 00:14:59.087 "flush": true, 00:14:59.087 "reset": true, 00:14:59.087 "nvme_admin": false, 00:14:59.087 "nvme_io": false, 00:14:59.087 "nvme_io_md": false, 00:14:59.087 "write_zeroes": true, 00:14:59.087 "zcopy": false, 00:14:59.087 "get_zone_info": false, 00:14:59.087 "zone_management": false, 00:14:59.087 "zone_append": false, 00:14:59.087 "compare": false, 00:14:59.087 "compare_and_write": false, 00:14:59.087 "abort": false, 00:14:59.087 "seek_hole": false, 00:14:59.087 "seek_data": false, 00:14:59.087 "copy": false, 00:14:59.087 "nvme_iov_md": false 00:14:59.087 }, 00:14:59.087 "memory_domains": [ 00:14:59.087 { 00:14:59.087 "dma_device_id": "system", 
00:14:59.087 "dma_device_type": 1 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.087 "dma_device_type": 2 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "system", 00:14:59.087 "dma_device_type": 1 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.087 "dma_device_type": 2 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "system", 00:14:59.087 "dma_device_type": 1 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.087 "dma_device_type": 2 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "system", 00:14:59.087 "dma_device_type": 1 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.087 "dma_device_type": 2 00:14:59.087 } 00:14:59.087 ], 00:14:59.087 "driver_specific": { 00:14:59.087 "raid": { 00:14:59.087 "uuid": "e3edc31a-c4bd-4d20-9331-b8d7f1a81857", 00:14:59.087 "strip_size_kb": 64, 00:14:59.087 "state": "online", 00:14:59.087 "raid_level": "raid0", 00:14:59.087 "superblock": true, 00:14:59.087 "num_base_bdevs": 4, 00:14:59.087 "num_base_bdevs_discovered": 4, 00:14:59.087 "num_base_bdevs_operational": 4, 00:14:59.087 "base_bdevs_list": [ 00:14:59.087 { 00:14:59.087 "name": "pt1", 00:14:59.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:59.087 "is_configured": true, 00:14:59.087 "data_offset": 2048, 00:14:59.087 "data_size": 63488 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "name": "pt2", 00:14:59.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:59.087 "is_configured": true, 00:14:59.087 "data_offset": 2048, 00:14:59.087 "data_size": 63488 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "name": "pt3", 00:14:59.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:59.087 "is_configured": true, 00:14:59.087 "data_offset": 2048, 00:14:59.087 "data_size": 63488 00:14:59.087 }, 00:14:59.087 { 00:14:59.087 "name": "pt4", 00:14:59.087 
"uuid": "00000000-0000-0000-0000-000000000004", 00:14:59.087 "is_configured": true, 00:14:59.087 "data_offset": 2048, 00:14:59.087 "data_size": 63488 00:14:59.087 } 00:14:59.087 ] 00:14:59.087 } 00:14:59.087 } 00:14:59.087 }' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:59.087 pt2 00:14:59.087 pt3 00:14:59.087 pt4' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.087 14:14:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.087 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 14:14:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.344 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.344 [2024-11-27 14:14:30.154683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e3edc31a-c4bd-4d20-9331-b8d7f1a81857 '!=' e3edc31a-c4bd-4d20-9331-b8d7f1a81857 ']' 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70949 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70949 ']' 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70949 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:59.345 14:14:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70949 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.345 killing process with pid 70949 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70949' 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70949 00:14:59.345 [2024-11-27 14:14:30.235251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.345 [2024-11-27 14:14:30.235362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.345 14:14:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70949 00:14:59.345 [2024-11-27 14:14:30.235443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.345 [2024-11-27 14:14:30.235458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:59.957 [2024-11-27 14:14:30.687842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.338 14:14:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:01.338 00:15:01.338 real 0m5.851s 00:15:01.338 user 0m8.299s 00:15:01.338 sys 0m0.947s 00:15:01.338 14:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.338 14:14:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.338 ************************************ 00:15:01.338 END TEST raid_superblock_test 00:15:01.338 ************************************ 00:15:01.338 
14:14:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:15:01.338 14:14:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:01.338 14:14:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.338 14:14:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.338 ************************************ 00:15:01.338 START TEST raid_read_error_test 00:15:01.338 ************************************ 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:01.338 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0MfxTVr2Du 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71214 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71214 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71214 ']' 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.339 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.339 [2024-11-27 14:14:32.154990] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:01.339 [2024-11-27 14:14:32.155158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71214 ] 00:15:01.598 [2024-11-27 14:14:32.320551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.598 [2024-11-27 14:14:32.456544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.858 [2024-11-27 14:14:32.680701] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.858 [2024-11-27 14:14:32.680808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.118 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.118 14:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.118 BaseBdev1_malloc 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.118 true 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.118 [2024-11-27 14:14:33.063492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:02.118 [2024-11-27 14:14:33.063550] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.118 [2024-11-27 14:14:33.063572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:02.118 [2024-11-27 14:14:33.063584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.118 [2024-11-27 14:14:33.065931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.118 [2024-11-27 14:14:33.065975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.118 BaseBdev1 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.118 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 BaseBdev2_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 true 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 [2024-11-27 14:14:33.131995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:02.379 [2024-11-27 14:14:33.132058] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.379 [2024-11-27 14:14:33.132075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:02.379 [2024-11-27 14:14:33.132086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.379 [2024-11-27 14:14:33.134331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.379 [2024-11-27 14:14:33.134368] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:02.379 BaseBdev2 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 BaseBdev3_malloc 00:15:02.379 14:14:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 true 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 [2024-11-27 14:14:33.213798] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:02.379 [2024-11-27 14:14:33.213922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.379 [2024-11-27 14:14:33.213964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:02.379 [2024-11-27 14:14:33.213978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.379 [2024-11-27 14:14:33.216554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.379 [2024-11-27 14:14:33.216599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:02.379 BaseBdev3 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 BaseBdev4_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 true 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 [2024-11-27 14:14:33.283332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:02.379 [2024-11-27 14:14:33.283388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.379 [2024-11-27 14:14:33.283405] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:02.379 [2024-11-27 14:14:33.283416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.379 [2024-11-27 14:14:33.285646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.379 [2024-11-27 14:14:33.285729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:02.379 BaseBdev4 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.379 [2024-11-27 14:14:33.295374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.379 [2024-11-27 14:14:33.297371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.379 [2024-11-27 14:14:33.297503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.379 [2024-11-27 14:14:33.297606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.379 [2024-11-27 14:14:33.297857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:02.379 [2024-11-27 14:14:33.297913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:02.379 [2024-11-27 14:14:33.298217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:02.379 [2024-11-27 14:14:33.298432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:02.379 [2024-11-27 14:14:33.298478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:02.379 [2024-11-27 14:14:33.298691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:02.379 14:14:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.379 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.380 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.638 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.638 "name": "raid_bdev1", 00:15:02.638 "uuid": "bfa547ba-7d1b-490a-ad62-f7a1095e35de", 00:15:02.638 "strip_size_kb": 64, 00:15:02.638 "state": "online", 00:15:02.638 "raid_level": "raid0", 00:15:02.638 "superblock": true, 00:15:02.638 "num_base_bdevs": 4, 00:15:02.638 "num_base_bdevs_discovered": 4, 00:15:02.638 "num_base_bdevs_operational": 4, 00:15:02.638 "base_bdevs_list": [ 00:15:02.638 
{ 00:15:02.639 "name": "BaseBdev1", 00:15:02.639 "uuid": "d11237c6-e7c6-5e1d-bbdf-17aee9f28c2b", 00:15:02.639 "is_configured": true, 00:15:02.639 "data_offset": 2048, 00:15:02.639 "data_size": 63488 00:15:02.639 }, 00:15:02.639 { 00:15:02.639 "name": "BaseBdev2", 00:15:02.639 "uuid": "d749ad6c-c31c-5524-ad55-451c8d7b1def", 00:15:02.639 "is_configured": true, 00:15:02.639 "data_offset": 2048, 00:15:02.639 "data_size": 63488 00:15:02.639 }, 00:15:02.639 { 00:15:02.639 "name": "BaseBdev3", 00:15:02.639 "uuid": "6578a9e5-cd26-54b8-8739-d9d548e9ffee", 00:15:02.639 "is_configured": true, 00:15:02.639 "data_offset": 2048, 00:15:02.639 "data_size": 63488 00:15:02.639 }, 00:15:02.639 { 00:15:02.639 "name": "BaseBdev4", 00:15:02.639 "uuid": "9d45818c-4098-551f-98c2-94d234121537", 00:15:02.639 "is_configured": true, 00:15:02.639 "data_offset": 2048, 00:15:02.639 "data_size": 63488 00:15:02.639 } 00:15:02.639 ] 00:15:02.639 }' 00:15:02.639 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.639 14:14:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.897 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:02.897 14:14:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:02.897 [2024-11-27 14:14:33.795906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.899 14:14:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.899 14:14:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.899 "name": "raid_bdev1", 00:15:03.899 "uuid": "bfa547ba-7d1b-490a-ad62-f7a1095e35de", 00:15:03.899 "strip_size_kb": 64, 00:15:03.899 "state": "online", 00:15:03.899 "raid_level": "raid0", 00:15:03.899 "superblock": true, 00:15:03.899 "num_base_bdevs": 4, 00:15:03.899 "num_base_bdevs_discovered": 4, 00:15:03.899 "num_base_bdevs_operational": 4, 00:15:03.899 "base_bdevs_list": [ 00:15:03.899 { 00:15:03.899 "name": "BaseBdev1", 00:15:03.899 "uuid": "d11237c6-e7c6-5e1d-bbdf-17aee9f28c2b", 00:15:03.899 "is_configured": true, 00:15:03.899 "data_offset": 2048, 00:15:03.899 "data_size": 63488 00:15:03.899 }, 00:15:03.899 { 00:15:03.899 "name": "BaseBdev2", 00:15:03.899 "uuid": "d749ad6c-c31c-5524-ad55-451c8d7b1def", 00:15:03.899 "is_configured": true, 00:15:03.899 "data_offset": 2048, 00:15:03.899 "data_size": 63488 00:15:03.899 }, 00:15:03.899 { 00:15:03.899 "name": "BaseBdev3", 00:15:03.899 "uuid": "6578a9e5-cd26-54b8-8739-d9d548e9ffee", 00:15:03.899 "is_configured": true, 00:15:03.899 "data_offset": 2048, 00:15:03.899 "data_size": 63488 00:15:03.899 }, 00:15:03.899 { 00:15:03.899 "name": "BaseBdev4", 00:15:03.899 "uuid": "9d45818c-4098-551f-98c2-94d234121537", 00:15:03.899 "is_configured": true, 00:15:03.899 "data_offset": 2048, 00:15:03.899 "data_size": 63488 00:15:03.899 } 00:15:03.899 ] 00:15:03.899 }' 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.899 14:14:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.469 [2024-11-27 14:14:35.173003] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.469 [2024-11-27 14:14:35.173128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.469 [2024-11-27 14:14:35.176395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.469 [2024-11-27 14:14:35.176513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.469 [2024-11-27 14:14:35.176587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.469 [2024-11-27 14:14:35.176647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:04.469 { 00:15:04.469 "results": [ 00:15:04.469 { 00:15:04.469 "job": "raid_bdev1", 00:15:04.469 "core_mask": "0x1", 00:15:04.469 "workload": "randrw", 00:15:04.469 "percentage": 50, 00:15:04.469 "status": "finished", 00:15:04.469 "queue_depth": 1, 00:15:04.469 "io_size": 131072, 00:15:04.469 "runtime": 1.377723, 00:15:04.469 "iops": 13932.408764316195, 00:15:04.469 "mibps": 1741.5510955395243, 00:15:04.469 "io_failed": 1, 00:15:04.469 "io_timeout": 0, 00:15:04.469 "avg_latency_us": 99.4875067676945, 00:15:04.469 "min_latency_us": 28.28296943231441, 00:15:04.469 "max_latency_us": 1638.4 00:15:04.469 } 00:15:04.469 ], 00:15:04.469 "core_count": 1 00:15:04.469 } 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71214 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71214 ']' 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71214 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71214 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.469 killing process with pid 71214 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71214' 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71214 00:15:04.469 14:14:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71214 00:15:04.469 [2024-11-27 14:14:35.216166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:04.729 [2024-11-27 14:14:35.586802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0MfxTVr2Du 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:06.107 ************************************ 00:15:06.107 END TEST raid_read_error_test 00:15:06.107 ************************************ 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:15:06.107 00:15:06.107 real 0m4.936s 
00:15:06.107 user 0m5.741s 00:15:06.107 sys 0m0.614s 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.107 14:14:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.107 14:14:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:15:06.107 14:14:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:06.107 14:14:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.107 14:14:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:06.107 ************************************ 00:15:06.107 START TEST raid_write_error_test 00:15:06.107 ************************************ 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:06.107 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s2IDfKYQBI 00:15:06.366 14:14:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71367 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71367 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71367 ']' 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.366 14:14:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.366 [2024-11-27 14:14:37.153652] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:06.366 [2024-11-27 14:14:37.153880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:15:06.366 [2024-11-27 14:14:37.315549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.625 [2024-11-27 14:14:37.471537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.884 [2024-11-27 14:14:37.696545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.884 [2024-11-27 14:14:37.696642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.143 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 BaseBdev1_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 true 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 [2024-11-27 14:14:38.126149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:07.404 [2024-11-27 14:14:38.126221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.404 [2024-11-27 14:14:38.126243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:07.404 [2024-11-27 14:14:38.126254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.404 [2024-11-27 14:14:38.128451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.404 [2024-11-27 14:14:38.128497] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:07.404 BaseBdev1 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 BaseBdev2_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:07.404 14:14:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 true 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 [2024-11-27 14:14:38.194075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:07.404 [2024-11-27 14:14:38.194186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.404 [2024-11-27 14:14:38.194210] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:07.404 [2024-11-27 14:14:38.194221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.404 [2024-11-27 14:14:38.196587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.404 [2024-11-27 14:14:38.196682] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:07.404 BaseBdev2 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:07.404 BaseBdev3_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 true 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.404 [2024-11-27 14:14:38.272809] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:07.404 [2024-11-27 14:14:38.272874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.404 [2024-11-27 14:14:38.272898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:07.404 [2024-11-27 14:14:38.272911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.404 [2024-11-27 14:14:38.275146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.404 [2024-11-27 14:14:38.275188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:07.404 BaseBdev3 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:07.404 14:14:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.405 BaseBdev4_malloc 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.405 true 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.405 [2024-11-27 14:14:38.335235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:07.405 [2024-11-27 14:14:38.335293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.405 [2024-11-27 14:14:38.335314] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:07.405 [2024-11-27 14:14:38.335326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.405 [2024-11-27 14:14:38.337701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.405 [2024-11-27 14:14:38.337746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:07.405 BaseBdev4 
00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.405 [2024-11-27 14:14:38.343278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.405 [2024-11-27 14:14:38.345439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.405 [2024-11-27 14:14:38.345572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.405 [2024-11-27 14:14:38.345669] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:07.405 [2024-11-27 14:14:38.345927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:07.405 [2024-11-27 14:14:38.345982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.405 [2024-11-27 14:14:38.346270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:07.405 [2024-11-27 14:14:38.346477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:07.405 [2024-11-27 14:14:38.346518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:07.405 [2024-11-27 14:14:38.346712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.405 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.665 "name": "raid_bdev1", 00:15:07.665 "uuid": "bb00e963-e2cf-4538-8af9-00f1bc863780", 00:15:07.665 "strip_size_kb": 64, 00:15:07.665 "state": "online", 00:15:07.665 "raid_level": "raid0", 00:15:07.665 "superblock": true, 00:15:07.665 "num_base_bdevs": 4, 00:15:07.665 "num_base_bdevs_discovered": 4, 00:15:07.665 
"num_base_bdevs_operational": 4, 00:15:07.665 "base_bdevs_list": [ 00:15:07.665 { 00:15:07.665 "name": "BaseBdev1", 00:15:07.665 "uuid": "20f6b408-2f87-5231-82d1-3c877682d051", 00:15:07.665 "is_configured": true, 00:15:07.665 "data_offset": 2048, 00:15:07.665 "data_size": 63488 00:15:07.665 }, 00:15:07.665 { 00:15:07.665 "name": "BaseBdev2", 00:15:07.665 "uuid": "5f12805e-ce11-5aa4-88d8-3b271249c78e", 00:15:07.665 "is_configured": true, 00:15:07.665 "data_offset": 2048, 00:15:07.665 "data_size": 63488 00:15:07.665 }, 00:15:07.665 { 00:15:07.665 "name": "BaseBdev3", 00:15:07.665 "uuid": "47a8cc63-0c92-568d-95f1-ea6f66182eb1", 00:15:07.665 "is_configured": true, 00:15:07.665 "data_offset": 2048, 00:15:07.665 "data_size": 63488 00:15:07.665 }, 00:15:07.665 { 00:15:07.665 "name": "BaseBdev4", 00:15:07.665 "uuid": "22cabdbb-ed62-5814-945a-e6f3fe84ab0b", 00:15:07.665 "is_configured": true, 00:15:07.665 "data_offset": 2048, 00:15:07.665 "data_size": 63488 00:15:07.665 } 00:15:07.665 ] 00:15:07.665 }' 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.665 14:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.925 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:07.925 14:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:08.184 [2024-11-27 14:14:38.923883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.124 "name": "raid_bdev1", 00:15:09.124 "uuid": "bb00e963-e2cf-4538-8af9-00f1bc863780", 00:15:09.124 "strip_size_kb": 64, 00:15:09.124 "state": "online", 00:15:09.124 "raid_level": "raid0", 00:15:09.124 "superblock": true, 00:15:09.124 "num_base_bdevs": 4, 00:15:09.124 "num_base_bdevs_discovered": 4, 00:15:09.124 "num_base_bdevs_operational": 4, 00:15:09.124 "base_bdevs_list": [ 00:15:09.124 { 00:15:09.124 "name": "BaseBdev1", 00:15:09.124 "uuid": "20f6b408-2f87-5231-82d1-3c877682d051", 00:15:09.124 "is_configured": true, 00:15:09.124 "data_offset": 2048, 00:15:09.124 "data_size": 63488 00:15:09.124 }, 00:15:09.124 { 00:15:09.124 "name": "BaseBdev2", 00:15:09.124 "uuid": "5f12805e-ce11-5aa4-88d8-3b271249c78e", 00:15:09.124 "is_configured": true, 00:15:09.124 "data_offset": 2048, 00:15:09.124 "data_size": 63488 00:15:09.124 }, 00:15:09.124 { 00:15:09.124 "name": "BaseBdev3", 00:15:09.124 "uuid": "47a8cc63-0c92-568d-95f1-ea6f66182eb1", 00:15:09.124 "is_configured": true, 00:15:09.124 "data_offset": 2048, 00:15:09.124 "data_size": 63488 00:15:09.124 }, 00:15:09.124 { 00:15:09.124 "name": "BaseBdev4", 00:15:09.124 "uuid": "22cabdbb-ed62-5814-945a-e6f3fe84ab0b", 00:15:09.124 "is_configured": true, 00:15:09.124 "data_offset": 2048, 00:15:09.124 "data_size": 63488 00:15:09.124 } 00:15:09.124 ] 00:15:09.124 }' 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.124 14:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.384 14:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.384 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.384 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:09.384 [2024-11-27 14:14:40.333425] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.384 [2024-11-27 14:14:40.333541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.384 [2024-11-27 14:14:40.336919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.384 [2024-11-27 14:14:40.337033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.384 [2024-11-27 14:14:40.337126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.384 [2024-11-27 14:14:40.337187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:09.671 { 00:15:09.671 "results": [ 00:15:09.671 { 00:15:09.671 "job": "raid_bdev1", 00:15:09.671 "core_mask": "0x1", 00:15:09.671 "workload": "randrw", 00:15:09.671 "percentage": 50, 00:15:09.671 "status": "finished", 00:15:09.671 "queue_depth": 1, 00:15:09.671 "io_size": 131072, 00:15:09.671 "runtime": 1.41024, 00:15:09.671 "iops": 13625.340367597004, 00:15:09.671 "mibps": 1703.1675459496255, 00:15:09.671 "io_failed": 1, 00:15:09.671 "io_timeout": 0, 00:15:09.671 "avg_latency_us": 101.6781212163081, 00:15:09.671 "min_latency_us": 27.388646288209607, 00:15:09.671 "max_latency_us": 1538.235807860262 00:15:09.671 } 00:15:09.671 ], 00:15:09.671 "core_count": 1 00:15:09.671 } 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71367 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71367 ']' 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71367 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71367 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71367' 00:15:09.671 killing process with pid 71367 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71367 00:15:09.671 14:14:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71367 00:15:09.671 [2024-11-27 14:14:40.383001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.930 [2024-11-27 14:14:40.747515] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s2IDfKYQBI 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:11.308 00:15:11.308 real 0m4.982s 00:15:11.308 user 0m5.948s 00:15:11.308 sys 0m0.619s 00:15:11.308 14:14:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.308 14:14:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.308 ************************************ 00:15:11.308 END TEST raid_write_error_test 00:15:11.308 ************************************ 00:15:11.308 14:14:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:11.308 14:14:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:11.308 14:14:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:11.308 14:14:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.308 14:14:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.308 ************************************ 00:15:11.308 START TEST raid_state_function_test 00:15:11.308 ************************************ 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:11.308 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71511 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71511' 00:15:11.309 Process raid pid: 71511 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71511 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71511 ']' 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.309 14:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.309 [2024-11-27 14:14:42.194017] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:11.309 [2024-11-27 14:14:42.194284] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.568 [2024-11-27 14:14:42.375880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.568 [2024-11-27 14:14:42.504142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.827 [2024-11-27 14:14:42.734668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.827 [2024-11-27 14:14:42.734703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.393 [2024-11-27 14:14:43.071258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.393 [2024-11-27 14:14:43.071319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.393 [2024-11-27 14:14:43.071332] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.393 [2024-11-27 14:14:43.071343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.393 [2024-11-27 14:14:43.071351] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:12.393 [2024-11-27 14:14:43.071361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.393 [2024-11-27 14:14:43.071368] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:12.393 [2024-11-27 14:14:43.071378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.393 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.394 "name": "Existed_Raid", 00:15:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.394 "strip_size_kb": 64, 00:15:12.394 "state": "configuring", 00:15:12.394 "raid_level": "concat", 00:15:12.394 "superblock": false, 00:15:12.394 "num_base_bdevs": 4, 00:15:12.394 "num_base_bdevs_discovered": 0, 00:15:12.394 "num_base_bdevs_operational": 4, 00:15:12.394 "base_bdevs_list": [ 00:15:12.394 { 00:15:12.394 "name": "BaseBdev1", 00:15:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.394 "is_configured": false, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 0 00:15:12.394 }, 00:15:12.394 { 00:15:12.394 "name": "BaseBdev2", 00:15:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.394 "is_configured": false, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 0 00:15:12.394 }, 00:15:12.394 { 00:15:12.394 "name": "BaseBdev3", 00:15:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.394 "is_configured": false, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 0 00:15:12.394 }, 00:15:12.394 { 00:15:12.394 "name": "BaseBdev4", 00:15:12.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.394 "is_configured": false, 00:15:12.394 "data_offset": 0, 00:15:12.394 "data_size": 0 00:15:12.394 } 00:15:12.394 ] 00:15:12.394 }' 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.394 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.652 [2024-11-27 14:14:43.558356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.652 [2024-11-27 14:14:43.558472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.652 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.652 [2024-11-27 14:14:43.570322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.652 [2024-11-27 14:14:43.570412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.652 [2024-11-27 14:14:43.570448] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.652 [2024-11-27 14:14:43.570476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.652 [2024-11-27 14:14:43.570497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.652 [2024-11-27 14:14:43.570529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.652 [2024-11-27 14:14:43.570552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:12.652 [2024-11-27 14:14:43.570620] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.653 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.653 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.653 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.653 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.912 [2024-11-27 14:14:43.623951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.912 BaseBdev1 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.912 [ 00:15:12.912 { 00:15:12.912 "name": "BaseBdev1", 00:15:12.912 "aliases": [ 00:15:12.912 "99b722fd-1442-4edb-8a43-c6015fc4b279" 00:15:12.912 ], 00:15:12.912 "product_name": "Malloc disk", 00:15:12.912 "block_size": 512, 00:15:12.912 "num_blocks": 65536, 00:15:12.912 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:12.912 "assigned_rate_limits": { 00:15:12.912 "rw_ios_per_sec": 0, 00:15:12.912 "rw_mbytes_per_sec": 0, 00:15:12.912 "r_mbytes_per_sec": 0, 00:15:12.912 "w_mbytes_per_sec": 0 00:15:12.912 }, 00:15:12.912 "claimed": true, 00:15:12.912 "claim_type": "exclusive_write", 00:15:12.912 "zoned": false, 00:15:12.912 "supported_io_types": { 00:15:12.912 "read": true, 00:15:12.912 "write": true, 00:15:12.912 "unmap": true, 00:15:12.912 "flush": true, 00:15:12.912 "reset": true, 00:15:12.912 "nvme_admin": false, 00:15:12.912 "nvme_io": false, 00:15:12.912 "nvme_io_md": false, 00:15:12.912 "write_zeroes": true, 00:15:12.912 "zcopy": true, 00:15:12.912 "get_zone_info": false, 00:15:12.912 "zone_management": false, 00:15:12.912 "zone_append": false, 00:15:12.912 "compare": false, 00:15:12.912 "compare_and_write": false, 00:15:12.912 "abort": true, 00:15:12.912 "seek_hole": false, 00:15:12.912 "seek_data": false, 00:15:12.912 "copy": true, 00:15:12.912 "nvme_iov_md": false 00:15:12.912 }, 00:15:12.912 "memory_domains": [ 00:15:12.912 { 00:15:12.912 "dma_device_id": "system", 00:15:12.912 "dma_device_type": 1 00:15:12.912 }, 00:15:12.912 { 00:15:12.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.912 "dma_device_type": 2 00:15:12.912 } 00:15:12.912 ], 00:15:12.912 "driver_specific": {} 00:15:12.912 } 00:15:12.912 ] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.912 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.912 "name": "Existed_Raid", 
00:15:12.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.912 "strip_size_kb": 64, 00:15:12.912 "state": "configuring", 00:15:12.912 "raid_level": "concat", 00:15:12.912 "superblock": false, 00:15:12.912 "num_base_bdevs": 4, 00:15:12.912 "num_base_bdevs_discovered": 1, 00:15:12.912 "num_base_bdevs_operational": 4, 00:15:12.912 "base_bdevs_list": [ 00:15:12.912 { 00:15:12.912 "name": "BaseBdev1", 00:15:12.912 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:12.912 "is_configured": true, 00:15:12.912 "data_offset": 0, 00:15:12.912 "data_size": 65536 00:15:12.912 }, 00:15:12.912 { 00:15:12.912 "name": "BaseBdev2", 00:15:12.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.912 "is_configured": false, 00:15:12.912 "data_offset": 0, 00:15:12.912 "data_size": 0 00:15:12.912 }, 00:15:12.912 { 00:15:12.912 "name": "BaseBdev3", 00:15:12.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.912 "is_configured": false, 00:15:12.912 "data_offset": 0, 00:15:12.912 "data_size": 0 00:15:12.912 }, 00:15:12.912 { 00:15:12.912 "name": "BaseBdev4", 00:15:12.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.912 "is_configured": false, 00:15:12.912 "data_offset": 0, 00:15:12.912 "data_size": 0 00:15:12.912 } 00:15:12.913 ] 00:15:12.913 }' 00:15:12.913 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.913 14:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.172 [2024-11-27 14:14:44.095272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.172 [2024-11-27 14:14:44.095336] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.172 [2024-11-27 14:14:44.103309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.172 [2024-11-27 14:14:44.105372] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.172 [2024-11-27 14:14:44.105487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.172 [2024-11-27 14:14:44.105507] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.172 [2024-11-27 14:14:44.105522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.172 [2024-11-27 14:14:44.105532] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:13.172 [2024-11-27 14:14:44.105543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:13.172 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.173 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.432 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.432 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.432 "name": "Existed_Raid", 00:15:13.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.432 "strip_size_kb": 64, 00:15:13.432 "state": "configuring", 00:15:13.432 "raid_level": "concat", 00:15:13.432 "superblock": false, 00:15:13.432 "num_base_bdevs": 4, 00:15:13.432 
"num_base_bdevs_discovered": 1, 00:15:13.432 "num_base_bdevs_operational": 4, 00:15:13.432 "base_bdevs_list": [ 00:15:13.432 { 00:15:13.432 "name": "BaseBdev1", 00:15:13.432 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:13.432 "is_configured": true, 00:15:13.432 "data_offset": 0, 00:15:13.432 "data_size": 65536 00:15:13.432 }, 00:15:13.432 { 00:15:13.432 "name": "BaseBdev2", 00:15:13.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.432 "is_configured": false, 00:15:13.432 "data_offset": 0, 00:15:13.432 "data_size": 0 00:15:13.432 }, 00:15:13.432 { 00:15:13.432 "name": "BaseBdev3", 00:15:13.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.432 "is_configured": false, 00:15:13.432 "data_offset": 0, 00:15:13.432 "data_size": 0 00:15:13.432 }, 00:15:13.432 { 00:15:13.432 "name": "BaseBdev4", 00:15:13.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.432 "is_configured": false, 00:15:13.432 "data_offset": 0, 00:15:13.432 "data_size": 0 00:15:13.432 } 00:15:13.432 ] 00:15:13.432 }' 00:15:13.432 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.432 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.693 [2024-11-27 14:14:44.622104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.693 BaseBdev2 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:13.693 14:14:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.693 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.953 [ 00:15:13.953 { 00:15:13.953 "name": "BaseBdev2", 00:15:13.954 "aliases": [ 00:15:13.954 "3e0e1aef-744d-4772-8dee-745313f7a4fe" 00:15:13.954 ], 00:15:13.954 "product_name": "Malloc disk", 00:15:13.954 "block_size": 512, 00:15:13.954 "num_blocks": 65536, 00:15:13.954 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:13.954 "assigned_rate_limits": { 00:15:13.954 "rw_ios_per_sec": 0, 00:15:13.954 "rw_mbytes_per_sec": 0, 00:15:13.954 "r_mbytes_per_sec": 0, 00:15:13.954 "w_mbytes_per_sec": 0 00:15:13.954 }, 00:15:13.954 "claimed": true, 00:15:13.954 "claim_type": "exclusive_write", 00:15:13.954 "zoned": false, 00:15:13.954 "supported_io_types": { 
00:15:13.954 "read": true, 00:15:13.954 "write": true, 00:15:13.954 "unmap": true, 00:15:13.954 "flush": true, 00:15:13.954 "reset": true, 00:15:13.954 "nvme_admin": false, 00:15:13.954 "nvme_io": false, 00:15:13.954 "nvme_io_md": false, 00:15:13.954 "write_zeroes": true, 00:15:13.954 "zcopy": true, 00:15:13.954 "get_zone_info": false, 00:15:13.954 "zone_management": false, 00:15:13.954 "zone_append": false, 00:15:13.954 "compare": false, 00:15:13.954 "compare_and_write": false, 00:15:13.954 "abort": true, 00:15:13.954 "seek_hole": false, 00:15:13.954 "seek_data": false, 00:15:13.954 "copy": true, 00:15:13.954 "nvme_iov_md": false 00:15:13.954 }, 00:15:13.954 "memory_domains": [ 00:15:13.954 { 00:15:13.954 "dma_device_id": "system", 00:15:13.954 "dma_device_type": 1 00:15:13.954 }, 00:15:13.954 { 00:15:13.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.954 "dma_device_type": 2 00:15:13.954 } 00:15:13.954 ], 00:15:13.954 "driver_specific": {} 00:15:13.954 } 00:15:13.954 ] 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.954 "name": "Existed_Raid", 00:15:13.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.954 "strip_size_kb": 64, 00:15:13.954 "state": "configuring", 00:15:13.954 "raid_level": "concat", 00:15:13.954 "superblock": false, 00:15:13.954 "num_base_bdevs": 4, 00:15:13.954 "num_base_bdevs_discovered": 2, 00:15:13.954 "num_base_bdevs_operational": 4, 00:15:13.954 "base_bdevs_list": [ 00:15:13.954 { 00:15:13.954 "name": "BaseBdev1", 00:15:13.954 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:13.954 "is_configured": true, 00:15:13.954 "data_offset": 0, 00:15:13.954 "data_size": 65536 00:15:13.954 }, 00:15:13.954 { 00:15:13.954 "name": "BaseBdev2", 00:15:13.954 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:13.954 
"is_configured": true, 00:15:13.954 "data_offset": 0, 00:15:13.954 "data_size": 65536 00:15:13.954 }, 00:15:13.954 { 00:15:13.954 "name": "BaseBdev3", 00:15:13.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.954 "is_configured": false, 00:15:13.954 "data_offset": 0, 00:15:13.954 "data_size": 0 00:15:13.954 }, 00:15:13.954 { 00:15:13.954 "name": "BaseBdev4", 00:15:13.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.954 "is_configured": false, 00:15:13.954 "data_offset": 0, 00:15:13.954 "data_size": 0 00:15:13.954 } 00:15:13.954 ] 00:15:13.954 }' 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.954 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.214 [2024-11-27 14:14:45.150458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.214 BaseBdev3 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.214 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.473 [ 00:15:14.473 { 00:15:14.473 "name": "BaseBdev3", 00:15:14.473 "aliases": [ 00:15:14.473 "cae90b40-d679-4f23-9ecd-7abfe6e97726" 00:15:14.473 ], 00:15:14.473 "product_name": "Malloc disk", 00:15:14.473 "block_size": 512, 00:15:14.473 "num_blocks": 65536, 00:15:14.473 "uuid": "cae90b40-d679-4f23-9ecd-7abfe6e97726", 00:15:14.473 "assigned_rate_limits": { 00:15:14.473 "rw_ios_per_sec": 0, 00:15:14.473 "rw_mbytes_per_sec": 0, 00:15:14.473 "r_mbytes_per_sec": 0, 00:15:14.473 "w_mbytes_per_sec": 0 00:15:14.473 }, 00:15:14.473 "claimed": true, 00:15:14.473 "claim_type": "exclusive_write", 00:15:14.473 "zoned": false, 00:15:14.473 "supported_io_types": { 00:15:14.473 "read": true, 00:15:14.473 "write": true, 00:15:14.473 "unmap": true, 00:15:14.473 "flush": true, 00:15:14.473 "reset": true, 00:15:14.473 "nvme_admin": false, 00:15:14.473 "nvme_io": false, 00:15:14.473 "nvme_io_md": false, 00:15:14.473 "write_zeroes": true, 00:15:14.473 "zcopy": true, 00:15:14.473 "get_zone_info": false, 00:15:14.473 "zone_management": false, 00:15:14.473 "zone_append": false, 00:15:14.473 "compare": false, 00:15:14.473 "compare_and_write": false, 
00:15:14.473 "abort": true, 00:15:14.473 "seek_hole": false, 00:15:14.473 "seek_data": false, 00:15:14.473 "copy": true, 00:15:14.473 "nvme_iov_md": false 00:15:14.473 }, 00:15:14.473 "memory_domains": [ 00:15:14.473 { 00:15:14.473 "dma_device_id": "system", 00:15:14.473 "dma_device_type": 1 00:15:14.473 }, 00:15:14.473 { 00:15:14.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.473 "dma_device_type": 2 00:15:14.473 } 00:15:14.473 ], 00:15:14.473 "driver_specific": {} 00:15:14.473 } 00:15:14.473 ] 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.473 "name": "Existed_Raid", 00:15:14.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.473 "strip_size_kb": 64, 00:15:14.473 "state": "configuring", 00:15:14.473 "raid_level": "concat", 00:15:14.473 "superblock": false, 00:15:14.473 "num_base_bdevs": 4, 00:15:14.473 "num_base_bdevs_discovered": 3, 00:15:14.473 "num_base_bdevs_operational": 4, 00:15:14.473 "base_bdevs_list": [ 00:15:14.473 { 00:15:14.473 "name": "BaseBdev1", 00:15:14.473 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:14.473 "is_configured": true, 00:15:14.473 "data_offset": 0, 00:15:14.473 "data_size": 65536 00:15:14.473 }, 00:15:14.473 { 00:15:14.473 "name": "BaseBdev2", 00:15:14.473 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:14.473 "is_configured": true, 00:15:14.473 "data_offset": 0, 00:15:14.473 "data_size": 65536 00:15:14.473 }, 00:15:14.473 { 00:15:14.473 "name": "BaseBdev3", 00:15:14.473 "uuid": "cae90b40-d679-4f23-9ecd-7abfe6e97726", 00:15:14.473 "is_configured": true, 00:15:14.473 "data_offset": 0, 00:15:14.473 "data_size": 65536 00:15:14.473 }, 00:15:14.473 { 00:15:14.473 "name": "BaseBdev4", 00:15:14.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.473 "is_configured": false, 
00:15:14.473 "data_offset": 0, 00:15:14.473 "data_size": 0 00:15:14.473 } 00:15:14.473 ] 00:15:14.473 }' 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.473 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.040 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:15.040 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.040 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.040 [2024-11-27 14:14:45.734569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.040 [2024-11-27 14:14:45.734626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:15.040 [2024-11-27 14:14:45.734635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:15.041 [2024-11-27 14:14:45.734924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:15.041 [2024-11-27 14:14:45.735078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:15.041 [2024-11-27 14:14:45.735090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:15.041 [2024-11-27 14:14:45.735399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.041 BaseBdev4 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.041 [ 00:15:15.041 { 00:15:15.041 "name": "BaseBdev4", 00:15:15.041 "aliases": [ 00:15:15.041 "2bdf2a0e-be42-4df3-8c6a-53899752aef0" 00:15:15.041 ], 00:15:15.041 "product_name": "Malloc disk", 00:15:15.041 "block_size": 512, 00:15:15.041 "num_blocks": 65536, 00:15:15.041 "uuid": "2bdf2a0e-be42-4df3-8c6a-53899752aef0", 00:15:15.041 "assigned_rate_limits": { 00:15:15.041 "rw_ios_per_sec": 0, 00:15:15.041 "rw_mbytes_per_sec": 0, 00:15:15.041 "r_mbytes_per_sec": 0, 00:15:15.041 "w_mbytes_per_sec": 0 00:15:15.041 }, 00:15:15.041 "claimed": true, 00:15:15.041 "claim_type": "exclusive_write", 00:15:15.041 "zoned": false, 00:15:15.041 "supported_io_types": { 00:15:15.041 "read": true, 00:15:15.041 "write": true, 00:15:15.041 "unmap": true, 00:15:15.041 "flush": true, 00:15:15.041 "reset": true, 00:15:15.041 
"nvme_admin": false, 00:15:15.041 "nvme_io": false, 00:15:15.041 "nvme_io_md": false, 00:15:15.041 "write_zeroes": true, 00:15:15.041 "zcopy": true, 00:15:15.041 "get_zone_info": false, 00:15:15.041 "zone_management": false, 00:15:15.041 "zone_append": false, 00:15:15.041 "compare": false, 00:15:15.041 "compare_and_write": false, 00:15:15.041 "abort": true, 00:15:15.041 "seek_hole": false, 00:15:15.041 "seek_data": false, 00:15:15.041 "copy": true, 00:15:15.041 "nvme_iov_md": false 00:15:15.041 }, 00:15:15.041 "memory_domains": [ 00:15:15.041 { 00:15:15.041 "dma_device_id": "system", 00:15:15.041 "dma_device_type": 1 00:15:15.041 }, 00:15:15.041 { 00:15:15.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.041 "dma_device_type": 2 00:15:15.041 } 00:15:15.041 ], 00:15:15.041 "driver_specific": {} 00:15:15.041 } 00:15:15.041 ] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.041 
14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.041 "name": "Existed_Raid", 00:15:15.041 "uuid": "1c69334d-d7ad-4533-b1f6-f5a22e8ffc3e", 00:15:15.041 "strip_size_kb": 64, 00:15:15.041 "state": "online", 00:15:15.041 "raid_level": "concat", 00:15:15.041 "superblock": false, 00:15:15.041 "num_base_bdevs": 4, 00:15:15.041 "num_base_bdevs_discovered": 4, 00:15:15.041 "num_base_bdevs_operational": 4, 00:15:15.041 "base_bdevs_list": [ 00:15:15.041 { 00:15:15.041 "name": "BaseBdev1", 00:15:15.041 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:15.041 "is_configured": true, 00:15:15.041 "data_offset": 0, 00:15:15.041 "data_size": 65536 00:15:15.041 }, 00:15:15.041 { 00:15:15.041 "name": "BaseBdev2", 00:15:15.041 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:15.041 "is_configured": true, 00:15:15.041 "data_offset": 0, 00:15:15.041 "data_size": 65536 00:15:15.041 }, 00:15:15.041 { 00:15:15.041 "name": "BaseBdev3", 
00:15:15.041 "uuid": "cae90b40-d679-4f23-9ecd-7abfe6e97726", 00:15:15.041 "is_configured": true, 00:15:15.041 "data_offset": 0, 00:15:15.041 "data_size": 65536 00:15:15.041 }, 00:15:15.041 { 00:15:15.041 "name": "BaseBdev4", 00:15:15.041 "uuid": "2bdf2a0e-be42-4df3-8c6a-53899752aef0", 00:15:15.041 "is_configured": true, 00:15:15.041 "data_offset": 0, 00:15:15.041 "data_size": 65536 00:15:15.041 } 00:15:15.041 ] 00:15:15.041 }' 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.041 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.300 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.300 [2024-11-27 14:14:46.246121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.561 
14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.561 "name": "Existed_Raid", 00:15:15.561 "aliases": [ 00:15:15.561 "1c69334d-d7ad-4533-b1f6-f5a22e8ffc3e" 00:15:15.561 ], 00:15:15.561 "product_name": "Raid Volume", 00:15:15.561 "block_size": 512, 00:15:15.561 "num_blocks": 262144, 00:15:15.561 "uuid": "1c69334d-d7ad-4533-b1f6-f5a22e8ffc3e", 00:15:15.561 "assigned_rate_limits": { 00:15:15.561 "rw_ios_per_sec": 0, 00:15:15.561 "rw_mbytes_per_sec": 0, 00:15:15.561 "r_mbytes_per_sec": 0, 00:15:15.561 "w_mbytes_per_sec": 0 00:15:15.561 }, 00:15:15.561 "claimed": false, 00:15:15.561 "zoned": false, 00:15:15.561 "supported_io_types": { 00:15:15.561 "read": true, 00:15:15.561 "write": true, 00:15:15.561 "unmap": true, 00:15:15.561 "flush": true, 00:15:15.561 "reset": true, 00:15:15.561 "nvme_admin": false, 00:15:15.561 "nvme_io": false, 00:15:15.561 "nvme_io_md": false, 00:15:15.561 "write_zeroes": true, 00:15:15.561 "zcopy": false, 00:15:15.561 "get_zone_info": false, 00:15:15.561 "zone_management": false, 00:15:15.561 "zone_append": false, 00:15:15.561 "compare": false, 00:15:15.561 "compare_and_write": false, 00:15:15.561 "abort": false, 00:15:15.561 "seek_hole": false, 00:15:15.561 "seek_data": false, 00:15:15.561 "copy": false, 00:15:15.561 "nvme_iov_md": false 00:15:15.561 }, 00:15:15.561 "memory_domains": [ 00:15:15.561 { 00:15:15.561 "dma_device_id": "system", 00:15:15.561 "dma_device_type": 1 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.561 "dma_device_type": 2 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "system", 00:15:15.561 "dma_device_type": 1 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.561 "dma_device_type": 2 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "system", 00:15:15.561 "dma_device_type": 1 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:15.561 "dma_device_type": 2 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "system", 00:15:15.561 "dma_device_type": 1 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.561 "dma_device_type": 2 00:15:15.561 } 00:15:15.561 ], 00:15:15.561 "driver_specific": { 00:15:15.561 "raid": { 00:15:15.561 "uuid": "1c69334d-d7ad-4533-b1f6-f5a22e8ffc3e", 00:15:15.561 "strip_size_kb": 64, 00:15:15.561 "state": "online", 00:15:15.561 "raid_level": "concat", 00:15:15.561 "superblock": false, 00:15:15.561 "num_base_bdevs": 4, 00:15:15.561 "num_base_bdevs_discovered": 4, 00:15:15.561 "num_base_bdevs_operational": 4, 00:15:15.561 "base_bdevs_list": [ 00:15:15.561 { 00:15:15.561 "name": "BaseBdev1", 00:15:15.561 "uuid": "99b722fd-1442-4edb-8a43-c6015fc4b279", 00:15:15.561 "is_configured": true, 00:15:15.561 "data_offset": 0, 00:15:15.561 "data_size": 65536 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "name": "BaseBdev2", 00:15:15.561 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:15.561 "is_configured": true, 00:15:15.561 "data_offset": 0, 00:15:15.561 "data_size": 65536 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "name": "BaseBdev3", 00:15:15.561 "uuid": "cae90b40-d679-4f23-9ecd-7abfe6e97726", 00:15:15.561 "is_configured": true, 00:15:15.561 "data_offset": 0, 00:15:15.561 "data_size": 65536 00:15:15.561 }, 00:15:15.561 { 00:15:15.561 "name": "BaseBdev4", 00:15:15.561 "uuid": "2bdf2a0e-be42-4df3-8c6a-53899752aef0", 00:15:15.561 "is_configured": true, 00:15:15.561 "data_offset": 0, 00:15:15.561 "data_size": 65536 00:15:15.561 } 00:15:15.561 ] 00:15:15.561 } 00:15:15.561 } 00:15:15.561 }' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:15.561 BaseBdev2 
00:15:15.561 BaseBdev3 00:15:15.561 BaseBdev4' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.561 14:14:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.561 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.821 14:14:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.821 [2024-11-27 14:14:46.593344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:15.821 [2024-11-27 14:14:46.593437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.821 [2024-11-27 14:14:46.593513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.821 "name": "Existed_Raid", 00:15:15.821 "uuid": "1c69334d-d7ad-4533-b1f6-f5a22e8ffc3e", 00:15:15.821 "strip_size_kb": 64, 00:15:15.821 "state": "offline", 00:15:15.821 "raid_level": "concat", 00:15:15.821 "superblock": false, 00:15:15.821 "num_base_bdevs": 4, 00:15:15.821 "num_base_bdevs_discovered": 3, 00:15:15.821 "num_base_bdevs_operational": 3, 00:15:15.821 "base_bdevs_list": [ 00:15:15.821 { 00:15:15.821 "name": null, 00:15:15.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.821 "is_configured": false, 00:15:15.821 "data_offset": 0, 00:15:15.821 "data_size": 65536 00:15:15.821 }, 00:15:15.821 { 00:15:15.821 "name": "BaseBdev2", 00:15:15.821 "uuid": "3e0e1aef-744d-4772-8dee-745313f7a4fe", 00:15:15.821 "is_configured": 
true, 00:15:15.821 "data_offset": 0, 00:15:15.821 "data_size": 65536 00:15:15.821 }, 00:15:15.821 { 00:15:15.821 "name": "BaseBdev3", 00:15:15.821 "uuid": "cae90b40-d679-4f23-9ecd-7abfe6e97726", 00:15:15.821 "is_configured": true, 00:15:15.821 "data_offset": 0, 00:15:15.821 "data_size": 65536 00:15:15.821 }, 00:15:15.821 { 00:15:15.821 "name": "BaseBdev4", 00:15:15.821 "uuid": "2bdf2a0e-be42-4df3-8c6a-53899752aef0", 00:15:15.821 "is_configured": true, 00:15:15.821 "data_offset": 0, 00:15:15.821 "data_size": 65536 00:15:15.821 } 00:15:15.821 ] 00:15:15.821 }' 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.821 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
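The deletion of BaseBdev1 above drives the array from `online` to `offline` because `has_redundancy concat` returns 1 (bdev_raid.sh@198-200, @260-262). A hedged sketch of that decision; the set of levels treated as redundant here is an assumption, since the trace only shows the `concat` branch returning 1:

```shell
# Sketch of the has_redundancy helper seen in the trace. Which levels
# return 0 is an assumption; the log only shows `concat` taking the
# fall-through branch and returning 1.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;
        *) return 1 ;;
    esac
}

# Mirror of bdev_raid.sh@260-262: the state expected after removing
# one base bdev depends on whether the level tolerates the loss.
if has_redundancy concat; then
    expected_state=online
else
    expected_state=offline
fi
echo "$expected_state"   # → offline
```

This matches the subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call in the log: a non-redundant array with a missing base bdev is expected to be offline.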
00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.388 [2024-11-27 14:14:47.143789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.388 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.388 [2024-11-27 14:14:47.297160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.647 14:14:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.647 [2024-11-27 14:14:47.459551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:16.647 [2024-11-27 14:14:47.459609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.647 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 BaseBdev2 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 [ 00:15:16.907 { 00:15:16.907 "name": "BaseBdev2", 00:15:16.907 "aliases": [ 00:15:16.907 "4c276f09-511b-4572-83a6-416666454f00" 00:15:16.907 ], 00:15:16.907 "product_name": "Malloc disk", 00:15:16.907 "block_size": 512, 00:15:16.907 "num_blocks": 65536, 00:15:16.907 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:16.907 "assigned_rate_limits": { 00:15:16.907 "rw_ios_per_sec": 0, 00:15:16.907 "rw_mbytes_per_sec": 0, 00:15:16.907 "r_mbytes_per_sec": 0, 00:15:16.907 "w_mbytes_per_sec": 0 00:15:16.907 }, 00:15:16.907 "claimed": false, 00:15:16.907 "zoned": false, 00:15:16.907 "supported_io_types": { 00:15:16.907 "read": true, 00:15:16.907 "write": true, 00:15:16.907 "unmap": true, 00:15:16.907 "flush": true, 00:15:16.907 "reset": true, 00:15:16.907 "nvme_admin": false, 00:15:16.907 "nvme_io": false, 00:15:16.907 "nvme_io_md": false, 00:15:16.907 "write_zeroes": true, 00:15:16.907 "zcopy": true, 00:15:16.907 "get_zone_info": false, 00:15:16.907 "zone_management": false, 00:15:16.907 "zone_append": false, 00:15:16.907 "compare": false, 00:15:16.907 "compare_and_write": false, 00:15:16.907 "abort": true, 00:15:16.907 "seek_hole": false, 00:15:16.907 
"seek_data": false, 00:15:16.907 "copy": true, 00:15:16.907 "nvme_iov_md": false 00:15:16.907 }, 00:15:16.907 "memory_domains": [ 00:15:16.907 { 00:15:16.907 "dma_device_id": "system", 00:15:16.907 "dma_device_type": 1 00:15:16.907 }, 00:15:16.907 { 00:15:16.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.907 "dma_device_type": 2 00:15:16.907 } 00:15:16.907 ], 00:15:16.907 "driver_specific": {} 00:15:16.907 } 00:15:16.907 ] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 BaseBdev3 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.907 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.907 [ 00:15:16.907 { 00:15:16.908 "name": "BaseBdev3", 00:15:16.908 "aliases": [ 00:15:16.908 "a0549a28-7d38-4662-9cac-977558a44a2c" 00:15:16.908 ], 00:15:16.908 "product_name": "Malloc disk", 00:15:16.908 "block_size": 512, 00:15:16.908 "num_blocks": 65536, 00:15:16.908 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:16.908 "assigned_rate_limits": { 00:15:16.908 "rw_ios_per_sec": 0, 00:15:16.908 "rw_mbytes_per_sec": 0, 00:15:16.908 "r_mbytes_per_sec": 0, 00:15:16.908 "w_mbytes_per_sec": 0 00:15:16.908 }, 00:15:16.908 "claimed": false, 00:15:16.908 "zoned": false, 00:15:16.908 "supported_io_types": { 00:15:16.908 "read": true, 00:15:16.908 "write": true, 00:15:16.908 "unmap": true, 00:15:16.908 "flush": true, 00:15:16.908 "reset": true, 00:15:16.908 "nvme_admin": false, 00:15:16.908 "nvme_io": false, 00:15:16.908 "nvme_io_md": false, 00:15:16.908 "write_zeroes": true, 00:15:16.908 "zcopy": true, 00:15:16.908 "get_zone_info": false, 00:15:16.908 "zone_management": false, 00:15:16.908 "zone_append": false, 00:15:16.908 "compare": false, 00:15:16.908 "compare_and_write": false, 00:15:16.908 "abort": true, 00:15:16.908 "seek_hole": false, 00:15:16.908 "seek_data": false, 
00:15:16.908 "copy": true, 00:15:16.908 "nvme_iov_md": false 00:15:16.908 }, 00:15:16.908 "memory_domains": [ 00:15:16.908 { 00:15:16.908 "dma_device_id": "system", 00:15:16.908 "dma_device_type": 1 00:15:16.908 }, 00:15:16.908 { 00:15:16.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.908 "dma_device_type": 2 00:15:16.908 } 00:15:16.908 ], 00:15:16.908 "driver_specific": {} 00:15:16.908 } 00:15:16.908 ] 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.908 BaseBdev4 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.908 
14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.908 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.167 [ 00:15:17.167 { 00:15:17.167 "name": "BaseBdev4", 00:15:17.167 "aliases": [ 00:15:17.167 "5e8a6e6b-1183-448a-a371-2739c33b8946" 00:15:17.167 ], 00:15:17.167 "product_name": "Malloc disk", 00:15:17.167 "block_size": 512, 00:15:17.167 "num_blocks": 65536, 00:15:17.167 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:17.167 "assigned_rate_limits": { 00:15:17.167 "rw_ios_per_sec": 0, 00:15:17.167 "rw_mbytes_per_sec": 0, 00:15:17.167 "r_mbytes_per_sec": 0, 00:15:17.167 "w_mbytes_per_sec": 0 00:15:17.167 }, 00:15:17.167 "claimed": false, 00:15:17.167 "zoned": false, 00:15:17.167 "supported_io_types": { 00:15:17.167 "read": true, 00:15:17.167 "write": true, 00:15:17.167 "unmap": true, 00:15:17.167 "flush": true, 00:15:17.167 "reset": true, 00:15:17.167 "nvme_admin": false, 00:15:17.167 "nvme_io": false, 00:15:17.167 "nvme_io_md": false, 00:15:17.167 "write_zeroes": true, 00:15:17.167 "zcopy": true, 00:15:17.167 "get_zone_info": false, 00:15:17.167 "zone_management": false, 00:15:17.167 "zone_append": false, 00:15:17.167 "compare": false, 00:15:17.167 "compare_and_write": false, 00:15:17.167 "abort": true, 00:15:17.167 "seek_hole": false, 00:15:17.167 "seek_data": false, 00:15:17.167 
"copy": true, 00:15:17.167 "nvme_iov_md": false 00:15:17.167 }, 00:15:17.167 "memory_domains": [ 00:15:17.167 { 00:15:17.167 "dma_device_id": "system", 00:15:17.167 "dma_device_type": 1 00:15:17.167 }, 00:15:17.167 { 00:15:17.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.167 "dma_device_type": 2 00:15:17.167 } 00:15:17.167 ], 00:15:17.167 "driver_specific": {} 00:15:17.167 } 00:15:17.167 ] 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.167 [2024-11-27 14:14:47.880348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:17.167 [2024-11-27 14:14:47.880468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:17.167 [2024-11-27 14:14:47.880530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:17.167 [2024-11-27 14:14:47.882670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:17.167 [2024-11-27 14:14:47.882774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.167 14:14:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.167 "name": "Existed_Raid", 00:15:17.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.167 "strip_size_kb": 64, 00:15:17.167 "state": "configuring", 00:15:17.167 
"raid_level": "concat", 00:15:17.167 "superblock": false, 00:15:17.167 "num_base_bdevs": 4, 00:15:17.167 "num_base_bdevs_discovered": 3, 00:15:17.167 "num_base_bdevs_operational": 4, 00:15:17.167 "base_bdevs_list": [ 00:15:17.167 { 00:15:17.167 "name": "BaseBdev1", 00:15:17.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.167 "is_configured": false, 00:15:17.167 "data_offset": 0, 00:15:17.167 "data_size": 0 00:15:17.167 }, 00:15:17.167 { 00:15:17.167 "name": "BaseBdev2", 00:15:17.167 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:17.167 "is_configured": true, 00:15:17.167 "data_offset": 0, 00:15:17.167 "data_size": 65536 00:15:17.167 }, 00:15:17.167 { 00:15:17.167 "name": "BaseBdev3", 00:15:17.167 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:17.167 "is_configured": true, 00:15:17.167 "data_offset": 0, 00:15:17.167 "data_size": 65536 00:15:17.167 }, 00:15:17.167 { 00:15:17.167 "name": "BaseBdev4", 00:15:17.167 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:17.167 "is_configured": true, 00:15:17.167 "data_offset": 0, 00:15:17.167 "data_size": 65536 00:15:17.167 } 00:15:17.167 ] 00:15:17.167 }' 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.167 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.426 [2024-11-27 14:14:48.323588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.426 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.427 "name": "Existed_Raid", 00:15:17.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.427 "strip_size_kb": 64, 00:15:17.427 "state": "configuring", 00:15:17.427 "raid_level": "concat", 00:15:17.427 "superblock": false, 
00:15:17.427 "num_base_bdevs": 4, 00:15:17.427 "num_base_bdevs_discovered": 2, 00:15:17.427 "num_base_bdevs_operational": 4, 00:15:17.427 "base_bdevs_list": [ 00:15:17.427 { 00:15:17.427 "name": "BaseBdev1", 00:15:17.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.427 "is_configured": false, 00:15:17.427 "data_offset": 0, 00:15:17.427 "data_size": 0 00:15:17.427 }, 00:15:17.427 { 00:15:17.427 "name": null, 00:15:17.427 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:17.427 "is_configured": false, 00:15:17.427 "data_offset": 0, 00:15:17.427 "data_size": 65536 00:15:17.427 }, 00:15:17.427 { 00:15:17.427 "name": "BaseBdev3", 00:15:17.427 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:17.427 "is_configured": true, 00:15:17.427 "data_offset": 0, 00:15:17.427 "data_size": 65536 00:15:17.427 }, 00:15:17.427 { 00:15:17.427 "name": "BaseBdev4", 00:15:17.427 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:17.427 "is_configured": true, 00:15:17.427 "data_offset": 0, 00:15:17.427 "data_size": 65536 00:15:17.427 } 00:15:17.427 ] 00:15:17.427 }' 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.427 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:18.025 14:14:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.025 [2024-11-27 14:14:48.839855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.025 BaseBdev1 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.025 14:14:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.025 [ 00:15:18.025 { 00:15:18.025 "name": "BaseBdev1", 00:15:18.025 "aliases": [ 00:15:18.025 "ba34b20c-322e-4be2-831c-59e1a84ea59e" 00:15:18.025 ], 00:15:18.025 "product_name": "Malloc disk", 00:15:18.025 "block_size": 512, 00:15:18.025 "num_blocks": 65536, 00:15:18.025 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:18.025 "assigned_rate_limits": { 00:15:18.025 "rw_ios_per_sec": 0, 00:15:18.025 "rw_mbytes_per_sec": 0, 00:15:18.025 "r_mbytes_per_sec": 0, 00:15:18.025 "w_mbytes_per_sec": 0 00:15:18.025 }, 00:15:18.025 "claimed": true, 00:15:18.025 "claim_type": "exclusive_write", 00:15:18.025 "zoned": false, 00:15:18.025 "supported_io_types": { 00:15:18.025 "read": true, 00:15:18.025 "write": true, 00:15:18.025 "unmap": true, 00:15:18.025 "flush": true, 00:15:18.025 "reset": true, 00:15:18.025 "nvme_admin": false, 00:15:18.025 "nvme_io": false, 00:15:18.025 "nvme_io_md": false, 00:15:18.025 "write_zeroes": true, 00:15:18.025 "zcopy": true, 00:15:18.025 "get_zone_info": false, 00:15:18.025 "zone_management": false, 00:15:18.025 "zone_append": false, 00:15:18.025 "compare": false, 00:15:18.025 "compare_and_write": false, 00:15:18.025 "abort": true, 00:15:18.025 "seek_hole": false, 00:15:18.025 "seek_data": false, 00:15:18.025 "copy": true, 00:15:18.025 "nvme_iov_md": false 00:15:18.025 }, 00:15:18.026 "memory_domains": [ 00:15:18.026 { 00:15:18.026 "dma_device_id": "system", 00:15:18.026 "dma_device_type": 1 00:15:18.026 }, 00:15:18.026 { 00:15:18.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.026 "dma_device_type": 2 00:15:18.026 } 00:15:18.026 ], 00:15:18.026 "driver_specific": {} 00:15:18.026 } 00:15:18.026 ] 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.026 "name": "Existed_Raid", 00:15:18.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.026 "strip_size_kb": 64, 00:15:18.026 "state": "configuring", 00:15:18.026 "raid_level": "concat", 00:15:18.026 "superblock": false, 
00:15:18.026 "num_base_bdevs": 4, 00:15:18.026 "num_base_bdevs_discovered": 3, 00:15:18.026 "num_base_bdevs_operational": 4, 00:15:18.026 "base_bdevs_list": [ 00:15:18.026 { 00:15:18.026 "name": "BaseBdev1", 00:15:18.026 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:18.026 "is_configured": true, 00:15:18.026 "data_offset": 0, 00:15:18.026 "data_size": 65536 00:15:18.026 }, 00:15:18.026 { 00:15:18.026 "name": null, 00:15:18.026 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:18.026 "is_configured": false, 00:15:18.026 "data_offset": 0, 00:15:18.026 "data_size": 65536 00:15:18.026 }, 00:15:18.026 { 00:15:18.026 "name": "BaseBdev3", 00:15:18.026 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:18.026 "is_configured": true, 00:15:18.026 "data_offset": 0, 00:15:18.026 "data_size": 65536 00:15:18.026 }, 00:15:18.026 { 00:15:18.026 "name": "BaseBdev4", 00:15:18.026 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:18.026 "is_configured": true, 00:15:18.026 "data_offset": 0, 00:15:18.026 "data_size": 65536 00:15:18.026 } 00:15:18.026 ] 00:15:18.026 }' 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.026 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:18.594 14:14:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.594 [2024-11-27 14:14:49.403002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.594 14:14:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.594 "name": "Existed_Raid", 00:15:18.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.594 "strip_size_kb": 64, 00:15:18.594 "state": "configuring", 00:15:18.594 "raid_level": "concat", 00:15:18.594 "superblock": false, 00:15:18.594 "num_base_bdevs": 4, 00:15:18.594 "num_base_bdevs_discovered": 2, 00:15:18.594 "num_base_bdevs_operational": 4, 00:15:18.594 "base_bdevs_list": [ 00:15:18.594 { 00:15:18.594 "name": "BaseBdev1", 00:15:18.594 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:18.594 "is_configured": true, 00:15:18.594 "data_offset": 0, 00:15:18.594 "data_size": 65536 00:15:18.594 }, 00:15:18.594 { 00:15:18.594 "name": null, 00:15:18.594 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:18.594 "is_configured": false, 00:15:18.594 "data_offset": 0, 00:15:18.594 "data_size": 65536 00:15:18.594 }, 00:15:18.594 { 00:15:18.594 "name": null, 00:15:18.594 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:18.594 "is_configured": false, 00:15:18.594 "data_offset": 0, 00:15:18.594 "data_size": 65536 00:15:18.594 }, 00:15:18.594 { 00:15:18.594 "name": "BaseBdev4", 00:15:18.594 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:18.594 "is_configured": true, 00:15:18.594 "data_offset": 0, 00:15:18.594 "data_size": 65536 00:15:18.594 } 00:15:18.594 ] 00:15:18.594 }' 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.594 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.162 [2024-11-27 14:14:49.894156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.162 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.162 "name": "Existed_Raid", 00:15:19.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.162 "strip_size_kb": 64, 00:15:19.162 "state": "configuring", 00:15:19.162 "raid_level": "concat", 00:15:19.162 "superblock": false, 00:15:19.162 "num_base_bdevs": 4, 00:15:19.162 "num_base_bdevs_discovered": 3, 00:15:19.162 "num_base_bdevs_operational": 4, 00:15:19.162 "base_bdevs_list": [ 00:15:19.162 { 00:15:19.162 "name": "BaseBdev1", 00:15:19.162 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:19.162 "is_configured": true, 00:15:19.162 "data_offset": 0, 00:15:19.162 "data_size": 65536 00:15:19.162 }, 00:15:19.162 { 00:15:19.162 "name": null, 00:15:19.162 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:19.162 "is_configured": false, 00:15:19.163 "data_offset": 0, 00:15:19.163 "data_size": 65536 00:15:19.163 }, 00:15:19.163 { 00:15:19.163 "name": "BaseBdev3", 00:15:19.163 "uuid": 
"a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:19.163 "is_configured": true, 00:15:19.163 "data_offset": 0, 00:15:19.163 "data_size": 65536 00:15:19.163 }, 00:15:19.163 { 00:15:19.163 "name": "BaseBdev4", 00:15:19.163 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:19.163 "is_configured": true, 00:15:19.163 "data_offset": 0, 00:15:19.163 "data_size": 65536 00:15:19.163 } 00:15:19.163 ] 00:15:19.163 }' 00:15:19.163 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.163 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.423 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.423 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.423 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.423 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.682 [2024-11-27 14:14:50.421335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.682 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.683 "name": "Existed_Raid", 00:15:19.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.683 "strip_size_kb": 64, 00:15:19.683 "state": "configuring", 00:15:19.683 "raid_level": "concat", 00:15:19.683 "superblock": false, 00:15:19.683 "num_base_bdevs": 4, 00:15:19.683 
"num_base_bdevs_discovered": 2, 00:15:19.683 "num_base_bdevs_operational": 4, 00:15:19.683 "base_bdevs_list": [ 00:15:19.683 { 00:15:19.683 "name": null, 00:15:19.683 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:19.683 "is_configured": false, 00:15:19.683 "data_offset": 0, 00:15:19.683 "data_size": 65536 00:15:19.683 }, 00:15:19.683 { 00:15:19.683 "name": null, 00:15:19.683 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:19.683 "is_configured": false, 00:15:19.683 "data_offset": 0, 00:15:19.683 "data_size": 65536 00:15:19.683 }, 00:15:19.683 { 00:15:19.683 "name": "BaseBdev3", 00:15:19.683 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:19.683 "is_configured": true, 00:15:19.683 "data_offset": 0, 00:15:19.683 "data_size": 65536 00:15:19.683 }, 00:15:19.683 { 00:15:19.683 "name": "BaseBdev4", 00:15:19.683 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:19.683 "is_configured": true, 00:15:19.683 "data_offset": 0, 00:15:19.683 "data_size": 65536 00:15:19.683 } 00:15:19.683 ] 00:15:19.683 }' 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.683 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.255 [2024-11-27 14:14:51.058377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.255 "name": "Existed_Raid", 00:15:20.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.255 "strip_size_kb": 64, 00:15:20.255 "state": "configuring", 00:15:20.255 "raid_level": "concat", 00:15:20.255 "superblock": false, 00:15:20.255 "num_base_bdevs": 4, 00:15:20.255 "num_base_bdevs_discovered": 3, 00:15:20.255 "num_base_bdevs_operational": 4, 00:15:20.255 "base_bdevs_list": [ 00:15:20.255 { 00:15:20.255 "name": null, 00:15:20.255 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:20.255 "is_configured": false, 00:15:20.255 "data_offset": 0, 00:15:20.255 "data_size": 65536 00:15:20.255 }, 00:15:20.255 { 00:15:20.255 "name": "BaseBdev2", 00:15:20.255 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:20.255 "is_configured": true, 00:15:20.255 "data_offset": 0, 00:15:20.255 "data_size": 65536 00:15:20.255 }, 00:15:20.255 { 00:15:20.255 "name": "BaseBdev3", 00:15:20.255 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:20.255 "is_configured": true, 00:15:20.255 "data_offset": 0, 00:15:20.255 "data_size": 65536 00:15:20.255 }, 00:15:20.255 { 00:15:20.255 "name": "BaseBdev4", 00:15:20.255 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:20.255 "is_configured": true, 00:15:20.255 "data_offset": 0, 00:15:20.255 "data_size": 65536 00:15:20.255 } 00:15:20.255 ] 00:15:20.255 }' 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.255 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ba34b20c-322e-4be2-831c-59e1a84ea59e 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.823 [2024-11-27 14:14:51.607685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:20.823 [2024-11-27 14:14:51.607795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.823 [2024-11-27 14:14:51.607820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:20.823 [2024-11-27 14:14:51.608157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:20.823 [2024-11-27 14:14:51.608369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.823 [2024-11-27 14:14:51.608412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:20.823 [2024-11-27 14:14:51.608701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.823 NewBaseBdev 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.823 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.824 [ 00:15:20.824 { 00:15:20.824 "name": "NewBaseBdev", 00:15:20.824 "aliases": [ 00:15:20.824 "ba34b20c-322e-4be2-831c-59e1a84ea59e" 00:15:20.824 ], 00:15:20.824 "product_name": "Malloc disk", 00:15:20.824 "block_size": 512, 00:15:20.824 "num_blocks": 65536, 00:15:20.824 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:20.824 "assigned_rate_limits": { 00:15:20.824 "rw_ios_per_sec": 0, 00:15:20.824 "rw_mbytes_per_sec": 0, 00:15:20.824 "r_mbytes_per_sec": 0, 00:15:20.824 "w_mbytes_per_sec": 0 00:15:20.824 }, 00:15:20.824 "claimed": true, 00:15:20.824 "claim_type": "exclusive_write", 00:15:20.824 "zoned": false, 00:15:20.824 "supported_io_types": { 00:15:20.824 "read": true, 00:15:20.824 "write": true, 00:15:20.824 "unmap": true, 00:15:20.824 "flush": true, 00:15:20.824 "reset": true, 00:15:20.824 "nvme_admin": false, 00:15:20.824 "nvme_io": false, 00:15:20.824 "nvme_io_md": false, 00:15:20.824 "write_zeroes": true, 00:15:20.824 "zcopy": true, 00:15:20.824 "get_zone_info": false, 00:15:20.824 "zone_management": false, 00:15:20.824 "zone_append": false, 00:15:20.824 "compare": false, 00:15:20.824 "compare_and_write": false, 00:15:20.824 "abort": true, 00:15:20.824 "seek_hole": false, 00:15:20.824 "seek_data": false, 00:15:20.824 "copy": true, 00:15:20.824 "nvme_iov_md": false 00:15:20.824 }, 00:15:20.824 "memory_domains": [ 00:15:20.824 { 00:15:20.824 "dma_device_id": "system", 00:15:20.824 "dma_device_type": 1 00:15:20.824 }, 00:15:20.824 { 00:15:20.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.824 "dma_device_type": 2 00:15:20.824 } 00:15:20.824 ], 00:15:20.824 "driver_specific": {} 00:15:20.824 } 00:15:20.824 ] 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.824 "name": "Existed_Raid", 00:15:20.824 "uuid": "43e8bbe5-1f7e-4331-b217-30b76fb7342c", 00:15:20.824 "strip_size_kb": 64, 00:15:20.824 "state": "online", 00:15:20.824 "raid_level": "concat", 00:15:20.824 "superblock": false, 00:15:20.824 
"num_base_bdevs": 4, 00:15:20.824 "num_base_bdevs_discovered": 4, 00:15:20.824 "num_base_bdevs_operational": 4, 00:15:20.824 "base_bdevs_list": [ 00:15:20.824 { 00:15:20.824 "name": "NewBaseBdev", 00:15:20.824 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:20.824 "is_configured": true, 00:15:20.824 "data_offset": 0, 00:15:20.824 "data_size": 65536 00:15:20.824 }, 00:15:20.824 { 00:15:20.824 "name": "BaseBdev2", 00:15:20.824 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:20.824 "is_configured": true, 00:15:20.824 "data_offset": 0, 00:15:20.824 "data_size": 65536 00:15:20.824 }, 00:15:20.824 { 00:15:20.824 "name": "BaseBdev3", 00:15:20.824 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:20.824 "is_configured": true, 00:15:20.824 "data_offset": 0, 00:15:20.824 "data_size": 65536 00:15:20.824 }, 00:15:20.824 { 00:15:20.824 "name": "BaseBdev4", 00:15:20.824 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:20.824 "is_configured": true, 00:15:20.824 "data_offset": 0, 00:15:20.824 "data_size": 65536 00:15:20.824 } 00:15:20.824 ] 00:15:20.824 }' 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.824 14:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.392 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:21.392 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:21.392 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:21.393 14:14:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.393 [2024-11-27 14:14:52.099350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:21.393 "name": "Existed_Raid", 00:15:21.393 "aliases": [ 00:15:21.393 "43e8bbe5-1f7e-4331-b217-30b76fb7342c" 00:15:21.393 ], 00:15:21.393 "product_name": "Raid Volume", 00:15:21.393 "block_size": 512, 00:15:21.393 "num_blocks": 262144, 00:15:21.393 "uuid": "43e8bbe5-1f7e-4331-b217-30b76fb7342c", 00:15:21.393 "assigned_rate_limits": { 00:15:21.393 "rw_ios_per_sec": 0, 00:15:21.393 "rw_mbytes_per_sec": 0, 00:15:21.393 "r_mbytes_per_sec": 0, 00:15:21.393 "w_mbytes_per_sec": 0 00:15:21.393 }, 00:15:21.393 "claimed": false, 00:15:21.393 "zoned": false, 00:15:21.393 "supported_io_types": { 00:15:21.393 "read": true, 00:15:21.393 "write": true, 00:15:21.393 "unmap": true, 00:15:21.393 "flush": true, 00:15:21.393 "reset": true, 00:15:21.393 "nvme_admin": false, 00:15:21.393 "nvme_io": false, 00:15:21.393 "nvme_io_md": false, 00:15:21.393 "write_zeroes": true, 00:15:21.393 "zcopy": false, 00:15:21.393 "get_zone_info": false, 00:15:21.393 "zone_management": false, 00:15:21.393 "zone_append": false, 00:15:21.393 "compare": false, 00:15:21.393 "compare_and_write": false, 00:15:21.393 "abort": false, 00:15:21.393 "seek_hole": false, 00:15:21.393 "seek_data": false, 00:15:21.393 "copy": false, 00:15:21.393 "nvme_iov_md": false 00:15:21.393 }, 
00:15:21.393 "memory_domains": [ 00:15:21.393 { 00:15:21.393 "dma_device_id": "system", 00:15:21.393 "dma_device_type": 1 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.393 "dma_device_type": 2 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "system", 00:15:21.393 "dma_device_type": 1 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.393 "dma_device_type": 2 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "system", 00:15:21.393 "dma_device_type": 1 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.393 "dma_device_type": 2 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "system", 00:15:21.393 "dma_device_type": 1 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.393 "dma_device_type": 2 00:15:21.393 } 00:15:21.393 ], 00:15:21.393 "driver_specific": { 00:15:21.393 "raid": { 00:15:21.393 "uuid": "43e8bbe5-1f7e-4331-b217-30b76fb7342c", 00:15:21.393 "strip_size_kb": 64, 00:15:21.393 "state": "online", 00:15:21.393 "raid_level": "concat", 00:15:21.393 "superblock": false, 00:15:21.393 "num_base_bdevs": 4, 00:15:21.393 "num_base_bdevs_discovered": 4, 00:15:21.393 "num_base_bdevs_operational": 4, 00:15:21.393 "base_bdevs_list": [ 00:15:21.393 { 00:15:21.393 "name": "NewBaseBdev", 00:15:21.393 "uuid": "ba34b20c-322e-4be2-831c-59e1a84ea59e", 00:15:21.393 "is_configured": true, 00:15:21.393 "data_offset": 0, 00:15:21.393 "data_size": 65536 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "name": "BaseBdev2", 00:15:21.393 "uuid": "4c276f09-511b-4572-83a6-416666454f00", 00:15:21.393 "is_configured": true, 00:15:21.393 "data_offset": 0, 00:15:21.393 "data_size": 65536 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "name": "BaseBdev3", 00:15:21.393 "uuid": "a0549a28-7d38-4662-9cac-977558a44a2c", 00:15:21.393 "is_configured": true, 00:15:21.393 "data_offset": 0, 
00:15:21.393 "data_size": 65536 00:15:21.393 }, 00:15:21.393 { 00:15:21.393 "name": "BaseBdev4", 00:15:21.393 "uuid": "5e8a6e6b-1183-448a-a371-2739c33b8946", 00:15:21.393 "is_configured": true, 00:15:21.393 "data_offset": 0, 00:15:21.393 "data_size": 65536 00:15:21.393 } 00:15:21.393 ] 00:15:21.393 } 00:15:21.393 } 00:15:21.393 }' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:21.393 BaseBdev2 00:15:21.393 BaseBdev3 00:15:21.393 BaseBdev4' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.393 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.653 [2024-11-27 14:14:52.422389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.653 [2024-11-27 14:14:52.422486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.653 [2024-11-27 14:14:52.422603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.653 [2024-11-27 14:14:52.422698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.653 [2024-11-27 14:14:52.422745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71511 00:15:21.653 14:14:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71511 ']' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71511 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71511 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.653 killing process with pid 71511 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71511' 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71511 00:15:21.653 [2024-11-27 14:14:52.473388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.653 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71511 00:15:22.220 [2024-11-27 14:14:52.874212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:23.158 00:15:23.158 real 0m11.936s 00:15:23.158 user 0m18.956s 00:15:23.158 sys 0m2.064s 00:15:23.158 ************************************ 00:15:23.158 END TEST raid_state_function_test 00:15:23.158 ************************************ 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.158 14:14:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:23.158 14:14:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:23.158 14:14:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.158 14:14:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.158 ************************************ 00:15:23.158 START TEST raid_state_function_test_sb 00:15:23.158 ************************************ 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72189 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72189' 00:15:23.158 Process raid pid: 72189 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72189 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72189 ']' 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.158 14:14:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:23.417 [2024-11-27 14:14:54.179468] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:23.417 [2024-11-27 14:14:54.179751] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.679 [2024-11-27 14:14:54.377234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.679 [2024-11-27 14:14:54.498438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.939 [2024-11-27 14:14:54.717436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.939 [2024-11-27 14:14:54.717582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.198 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.198 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:24.198 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.198 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.198 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.198 [2024-11-27 14:14:55.028951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.198 [2024-11-27 14:14:55.029069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.199 [2024-11-27 14:14:55.029140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.199 [2024-11-27 14:14:55.029172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.199 [2024-11-27 14:14:55.029203] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:24.199 [2024-11-27 14:14:55.029231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.199 [2024-11-27 14:14:55.029272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.199 [2024-11-27 14:14:55.029330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.199 
14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.199 "name": "Existed_Raid", 00:15:24.199 "uuid": "77ff48bc-8cf7-4ec4-be4e-1cce6c7b1cc3", 00:15:24.199 "strip_size_kb": 64, 00:15:24.199 "state": "configuring", 00:15:24.199 "raid_level": "concat", 00:15:24.199 "superblock": true, 00:15:24.199 "num_base_bdevs": 4, 00:15:24.199 "num_base_bdevs_discovered": 0, 00:15:24.199 "num_base_bdevs_operational": 4, 00:15:24.199 "base_bdevs_list": [ 00:15:24.199 { 00:15:24.199 "name": "BaseBdev1", 00:15:24.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.199 "is_configured": false, 00:15:24.199 "data_offset": 0, 00:15:24.199 "data_size": 0 00:15:24.199 }, 00:15:24.199 { 00:15:24.199 "name": "BaseBdev2", 00:15:24.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.199 "is_configured": false, 00:15:24.199 "data_offset": 0, 00:15:24.199 "data_size": 0 00:15:24.199 }, 00:15:24.199 { 00:15:24.199 "name": "BaseBdev3", 00:15:24.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.199 "is_configured": false, 00:15:24.199 "data_offset": 0, 00:15:24.199 "data_size": 0 00:15:24.199 }, 00:15:24.199 { 00:15:24.199 "name": "BaseBdev4", 00:15:24.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.199 "is_configured": false, 00:15:24.199 "data_offset": 0, 00:15:24.199 "data_size": 0 00:15:24.199 } 00:15:24.199 ] 00:15:24.199 }' 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.199 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 14:14:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 [2024-11-27 14:14:55.508152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.769 [2024-11-27 14:14:55.508253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 [2024-11-27 14:14:55.520165] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.769 [2024-11-27 14:14:55.520214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.769 [2024-11-27 14:14:55.520225] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.769 [2024-11-27 14:14:55.520236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.769 [2024-11-27 14:14:55.520243] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.769 [2024-11-27 14:14:55.520254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.769 [2024-11-27 14:14:55.520261] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:24.769 [2024-11-27 14:14:55.520270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 [2024-11-27 14:14:55.570061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.769 BaseBdev1 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.769 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.769 [ 00:15:24.769 { 00:15:24.769 "name": "BaseBdev1", 00:15:24.769 "aliases": [ 00:15:24.769 "2f7e9e51-1414-42ef-a9b3-598e00721051" 00:15:24.769 ], 00:15:24.769 "product_name": "Malloc disk", 00:15:24.769 "block_size": 512, 00:15:24.769 "num_blocks": 65536, 00:15:24.769 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:24.769 "assigned_rate_limits": { 00:15:24.769 "rw_ios_per_sec": 0, 00:15:24.769 "rw_mbytes_per_sec": 0, 00:15:24.769 "r_mbytes_per_sec": 0, 00:15:24.769 "w_mbytes_per_sec": 0 00:15:24.770 }, 00:15:24.770 "claimed": true, 00:15:24.770 "claim_type": "exclusive_write", 00:15:24.770 "zoned": false, 00:15:24.770 "supported_io_types": { 00:15:24.770 "read": true, 00:15:24.770 "write": true, 00:15:24.770 "unmap": true, 00:15:24.770 "flush": true, 00:15:24.770 "reset": true, 00:15:24.770 "nvme_admin": false, 00:15:24.770 "nvme_io": false, 00:15:24.770 "nvme_io_md": false, 00:15:24.770 "write_zeroes": true, 00:15:24.770 "zcopy": true, 00:15:24.770 "get_zone_info": false, 00:15:24.770 "zone_management": false, 00:15:24.770 "zone_append": false, 00:15:24.770 "compare": false, 00:15:24.770 "compare_and_write": false, 00:15:24.770 "abort": true, 00:15:24.770 "seek_hole": false, 00:15:24.770 "seek_data": false, 00:15:24.770 "copy": true, 00:15:24.770 "nvme_iov_md": false 00:15:24.770 }, 00:15:24.770 "memory_domains": [ 00:15:24.770 { 00:15:24.770 "dma_device_id": "system", 00:15:24.770 "dma_device_type": 1 00:15:24.770 }, 00:15:24.770 { 00:15:24.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.770 "dma_device_type": 2 00:15:24.770 } 
00:15:24.770 ], 00:15:24.770 "driver_specific": {} 00:15:24.770 } 00:15:24.770 ] 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.770 14:14:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.770 "name": "Existed_Raid", 00:15:24.770 "uuid": "273cd856-97e2-4741-88c6-bbc7ea8c1e37", 00:15:24.770 "strip_size_kb": 64, 00:15:24.770 "state": "configuring", 00:15:24.770 "raid_level": "concat", 00:15:24.770 "superblock": true, 00:15:24.770 "num_base_bdevs": 4, 00:15:24.770 "num_base_bdevs_discovered": 1, 00:15:24.770 "num_base_bdevs_operational": 4, 00:15:24.770 "base_bdevs_list": [ 00:15:24.770 { 00:15:24.770 "name": "BaseBdev1", 00:15:24.770 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:24.770 "is_configured": true, 00:15:24.770 "data_offset": 2048, 00:15:24.770 "data_size": 63488 00:15:24.770 }, 00:15:24.770 { 00:15:24.770 "name": "BaseBdev2", 00:15:24.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.770 "is_configured": false, 00:15:24.770 "data_offset": 0, 00:15:24.770 "data_size": 0 00:15:24.770 }, 00:15:24.770 { 00:15:24.770 "name": "BaseBdev3", 00:15:24.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.770 "is_configured": false, 00:15:24.770 "data_offset": 0, 00:15:24.770 "data_size": 0 00:15:24.770 }, 00:15:24.770 { 00:15:24.770 "name": "BaseBdev4", 00:15:24.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.770 "is_configured": false, 00:15:24.770 "data_offset": 0, 00:15:24.770 "data_size": 0 00:15:24.770 } 00:15:24.770 ] 00:15:24.770 }' 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.770 14:14:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.338 14:14:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 [2024-11-27 14:14:56.041313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.338 [2024-11-27 14:14:56.041369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 [2024-11-27 14:14:56.053358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.338 [2024-11-27 14:14:56.055365] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.338 [2024-11-27 14:14:56.055410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.338 [2024-11-27 14:14:56.055421] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.338 [2024-11-27 14:14:56.055432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.338 [2024-11-27 14:14:56.055439] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.338 [2024-11-27 14:14:56.055447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.338 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:25.338 "name": "Existed_Raid", 00:15:25.338 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:25.338 "strip_size_kb": 64, 00:15:25.338 "state": "configuring", 00:15:25.338 "raid_level": "concat", 00:15:25.338 "superblock": true, 00:15:25.338 "num_base_bdevs": 4, 00:15:25.338 "num_base_bdevs_discovered": 1, 00:15:25.338 "num_base_bdevs_operational": 4, 00:15:25.339 "base_bdevs_list": [ 00:15:25.339 { 00:15:25.339 "name": "BaseBdev1", 00:15:25.339 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:25.339 "is_configured": true, 00:15:25.339 "data_offset": 2048, 00:15:25.339 "data_size": 63488 00:15:25.339 }, 00:15:25.339 { 00:15:25.339 "name": "BaseBdev2", 00:15:25.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.339 "is_configured": false, 00:15:25.339 "data_offset": 0, 00:15:25.339 "data_size": 0 00:15:25.339 }, 00:15:25.339 { 00:15:25.339 "name": "BaseBdev3", 00:15:25.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.339 "is_configured": false, 00:15:25.339 "data_offset": 0, 00:15:25.339 "data_size": 0 00:15:25.339 }, 00:15:25.339 { 00:15:25.339 "name": "BaseBdev4", 00:15:25.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.339 "is_configured": false, 00:15:25.339 "data_offset": 0, 00:15:25.339 "data_size": 0 00:15:25.339 } 00:15:25.339 ] 00:15:25.339 }' 00:15:25.339 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.339 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.598 [2024-11-27 14:14:56.527443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:25.598 BaseBdev2 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.598 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.598 [ 00:15:25.598 { 00:15:25.598 "name": "BaseBdev2", 00:15:25.598 "aliases": [ 00:15:25.598 "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1" 00:15:25.598 ], 00:15:25.598 "product_name": "Malloc disk", 00:15:25.598 "block_size": 512, 00:15:25.598 "num_blocks": 65536, 00:15:25.598 "uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 
00:15:25.598 "assigned_rate_limits": { 00:15:25.598 "rw_ios_per_sec": 0, 00:15:25.598 "rw_mbytes_per_sec": 0, 00:15:25.598 "r_mbytes_per_sec": 0, 00:15:25.598 "w_mbytes_per_sec": 0 00:15:25.598 }, 00:15:25.598 "claimed": true, 00:15:25.598 "claim_type": "exclusive_write", 00:15:25.598 "zoned": false, 00:15:25.598 "supported_io_types": { 00:15:25.598 "read": true, 00:15:25.598 "write": true, 00:15:25.598 "unmap": true, 00:15:25.598 "flush": true, 00:15:25.598 "reset": true, 00:15:25.598 "nvme_admin": false, 00:15:25.598 "nvme_io": false, 00:15:25.598 "nvme_io_md": false, 00:15:25.598 "write_zeroes": true, 00:15:25.598 "zcopy": true, 00:15:25.598 "get_zone_info": false, 00:15:25.878 "zone_management": false, 00:15:25.878 "zone_append": false, 00:15:25.878 "compare": false, 00:15:25.878 "compare_and_write": false, 00:15:25.878 "abort": true, 00:15:25.878 "seek_hole": false, 00:15:25.878 "seek_data": false, 00:15:25.878 "copy": true, 00:15:25.878 "nvme_iov_md": false 00:15:25.878 }, 00:15:25.878 "memory_domains": [ 00:15:25.878 { 00:15:25.878 "dma_device_id": "system", 00:15:25.878 "dma_device_type": 1 00:15:25.878 }, 00:15:25.878 { 00:15:25.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.878 "dma_device_type": 2 00:15:25.878 } 00:15:25.878 ], 00:15:25.878 "driver_specific": {} 00:15:25.878 } 00:15:25.878 ] 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.878 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.878 "name": "Existed_Raid", 00:15:25.878 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:25.878 "strip_size_kb": 64, 00:15:25.878 "state": "configuring", 00:15:25.878 "raid_level": "concat", 00:15:25.878 "superblock": true, 00:15:25.878 "num_base_bdevs": 4, 00:15:25.878 "num_base_bdevs_discovered": 2, 00:15:25.878 
"num_base_bdevs_operational": 4, 00:15:25.878 "base_bdevs_list": [ 00:15:25.878 { 00:15:25.878 "name": "BaseBdev1", 00:15:25.878 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:25.878 "is_configured": true, 00:15:25.878 "data_offset": 2048, 00:15:25.878 "data_size": 63488 00:15:25.879 }, 00:15:25.879 { 00:15:25.879 "name": "BaseBdev2", 00:15:25.879 "uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 00:15:25.879 "is_configured": true, 00:15:25.879 "data_offset": 2048, 00:15:25.879 "data_size": 63488 00:15:25.879 }, 00:15:25.879 { 00:15:25.879 "name": "BaseBdev3", 00:15:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.879 "is_configured": false, 00:15:25.879 "data_offset": 0, 00:15:25.879 "data_size": 0 00:15:25.879 }, 00:15:25.879 { 00:15:25.879 "name": "BaseBdev4", 00:15:25.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.879 "is_configured": false, 00:15:25.879 "data_offset": 0, 00:15:25.879 "data_size": 0 00:15:25.879 } 00:15:25.879 ] 00:15:25.879 }' 00:15:25.879 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.879 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 [2024-11-27 14:14:56.964698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.139 BaseBdev3 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.139 14:14:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 [ 00:15:26.139 { 00:15:26.139 "name": "BaseBdev3", 00:15:26.139 "aliases": [ 00:15:26.139 "a689c472-2417-4df6-8a41-fa75fc7deb49" 00:15:26.139 ], 00:15:26.139 "product_name": "Malloc disk", 00:15:26.139 "block_size": 512, 00:15:26.139 "num_blocks": 65536, 00:15:26.139 "uuid": "a689c472-2417-4df6-8a41-fa75fc7deb49", 00:15:26.139 "assigned_rate_limits": { 00:15:26.139 "rw_ios_per_sec": 0, 00:15:26.139 "rw_mbytes_per_sec": 0, 00:15:26.139 "r_mbytes_per_sec": 0, 00:15:26.139 "w_mbytes_per_sec": 0 00:15:26.139 }, 00:15:26.139 "claimed": true, 00:15:26.139 "claim_type": "exclusive_write", 00:15:26.139 "zoned": false, 00:15:26.139 "supported_io_types": { 
00:15:26.139 "read": true, 00:15:26.139 "write": true, 00:15:26.139 "unmap": true, 00:15:26.139 "flush": true, 00:15:26.139 "reset": true, 00:15:26.139 "nvme_admin": false, 00:15:26.139 "nvme_io": false, 00:15:26.139 "nvme_io_md": false, 00:15:26.139 "write_zeroes": true, 00:15:26.139 "zcopy": true, 00:15:26.139 "get_zone_info": false, 00:15:26.139 "zone_management": false, 00:15:26.139 "zone_append": false, 00:15:26.139 "compare": false, 00:15:26.139 "compare_and_write": false, 00:15:26.139 "abort": true, 00:15:26.139 "seek_hole": false, 00:15:26.139 "seek_data": false, 00:15:26.139 "copy": true, 00:15:26.139 "nvme_iov_md": false 00:15:26.139 }, 00:15:26.139 "memory_domains": [ 00:15:26.139 { 00:15:26.139 "dma_device_id": "system", 00:15:26.139 "dma_device_type": 1 00:15:26.139 }, 00:15:26.139 { 00:15:26.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.139 "dma_device_type": 2 00:15:26.139 } 00:15:26.139 ], 00:15:26.139 "driver_specific": {} 00:15:26.139 } 00:15:26.139 ] 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.139 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.139 "name": "Existed_Raid", 00:15:26.139 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:26.139 "strip_size_kb": 64, 00:15:26.139 "state": "configuring", 00:15:26.139 "raid_level": "concat", 00:15:26.139 "superblock": true, 00:15:26.139 "num_base_bdevs": 4, 00:15:26.139 "num_base_bdevs_discovered": 3, 00:15:26.139 "num_base_bdevs_operational": 4, 00:15:26.139 "base_bdevs_list": [ 00:15:26.139 { 00:15:26.139 "name": "BaseBdev1", 00:15:26.139 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:26.139 "is_configured": true, 00:15:26.139 "data_offset": 2048, 00:15:26.139 "data_size": 63488 00:15:26.139 }, 00:15:26.139 { 00:15:26.139 "name": "BaseBdev2", 00:15:26.139 
"uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 00:15:26.139 "is_configured": true, 00:15:26.139 "data_offset": 2048, 00:15:26.139 "data_size": 63488 00:15:26.139 }, 00:15:26.139 { 00:15:26.139 "name": "BaseBdev3", 00:15:26.139 "uuid": "a689c472-2417-4df6-8a41-fa75fc7deb49", 00:15:26.139 "is_configured": true, 00:15:26.139 "data_offset": 2048, 00:15:26.139 "data_size": 63488 00:15:26.139 }, 00:15:26.139 { 00:15:26.139 "name": "BaseBdev4", 00:15:26.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.139 "is_configured": false, 00:15:26.139 "data_offset": 0, 00:15:26.139 "data_size": 0 00:15:26.140 } 00:15:26.140 ] 00:15:26.140 }' 00:15:26.140 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.140 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.709 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:26.709 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.709 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.709 [2024-11-27 14:14:57.484326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.709 [2024-11-27 14:14:57.484616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:26.709 [2024-11-27 14:14:57.484633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:26.709 [2024-11-27 14:14:57.484898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:26.709 BaseBdev4 00:15:26.709 [2024-11-27 14:14:57.485050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:26.710 [2024-11-27 14:14:57.485063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:26.710 [2024-11-27 14:14:57.485214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.710 [ 00:15:26.710 { 00:15:26.710 "name": "BaseBdev4", 00:15:26.710 "aliases": [ 00:15:26.710 "664bfe0a-7350-4991-9e54-4eb8d041e050" 00:15:26.710 ], 00:15:26.710 "product_name": "Malloc disk", 00:15:26.710 "block_size": 512, 00:15:26.710 
"num_blocks": 65536, 00:15:26.710 "uuid": "664bfe0a-7350-4991-9e54-4eb8d041e050", 00:15:26.710 "assigned_rate_limits": { 00:15:26.710 "rw_ios_per_sec": 0, 00:15:26.710 "rw_mbytes_per_sec": 0, 00:15:26.710 "r_mbytes_per_sec": 0, 00:15:26.710 "w_mbytes_per_sec": 0 00:15:26.710 }, 00:15:26.710 "claimed": true, 00:15:26.710 "claim_type": "exclusive_write", 00:15:26.710 "zoned": false, 00:15:26.710 "supported_io_types": { 00:15:26.710 "read": true, 00:15:26.710 "write": true, 00:15:26.710 "unmap": true, 00:15:26.710 "flush": true, 00:15:26.710 "reset": true, 00:15:26.710 "nvme_admin": false, 00:15:26.710 "nvme_io": false, 00:15:26.710 "nvme_io_md": false, 00:15:26.710 "write_zeroes": true, 00:15:26.710 "zcopy": true, 00:15:26.710 "get_zone_info": false, 00:15:26.710 "zone_management": false, 00:15:26.710 "zone_append": false, 00:15:26.710 "compare": false, 00:15:26.710 "compare_and_write": false, 00:15:26.710 "abort": true, 00:15:26.710 "seek_hole": false, 00:15:26.710 "seek_data": false, 00:15:26.710 "copy": true, 00:15:26.710 "nvme_iov_md": false 00:15:26.710 }, 00:15:26.710 "memory_domains": [ 00:15:26.710 { 00:15:26.710 "dma_device_id": "system", 00:15:26.710 "dma_device_type": 1 00:15:26.710 }, 00:15:26.710 { 00:15:26.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.710 "dma_device_type": 2 00:15:26.710 } 00:15:26.710 ], 00:15:26.710 "driver_specific": {} 00:15:26.710 } 00:15:26.710 ] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
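The `verify_raid_bdev_state` helper invoked above filters the `bdev_raid_get_bdevs all` output through jq (`.[] | select(.name == "Existed_Raid")`) and compares the state fields against the expected values. A minimal Python sketch of the same check, using an abridged copy of the JSON dump that appears in this trace (the field names are exactly as printed by the RPC; the helper's precise comparison logic is an assumption inferred from the expected/actual values in the log):

```python
import json

# Abridged from the `rpc_cmd bdev_raid_get_bdevs all` dump in the trace above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Compare the same fields the shell helper checks after each
    # base bdev is added (configuring -> online as discovery completes).
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# The call at bdev_raid.sh@255: all four base bdevs present, array online.
verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 4)
```

With fewer than `num_base_bdevs` discovered (as in the earlier dumps showing `"num_base_bdevs_discovered": 2` and `3`), the same check would instead expect `"configuring"`.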
00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.710 "name": "Existed_Raid", 00:15:26.710 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:26.710 "strip_size_kb": 64, 00:15:26.710 "state": "online", 00:15:26.710 "raid_level": "concat", 00:15:26.710 "superblock": true, 00:15:26.710 "num_base_bdevs": 4, 
00:15:26.710 "num_base_bdevs_discovered": 4, 00:15:26.710 "num_base_bdevs_operational": 4, 00:15:26.710 "base_bdevs_list": [ 00:15:26.710 { 00:15:26.710 "name": "BaseBdev1", 00:15:26.710 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:26.710 "is_configured": true, 00:15:26.710 "data_offset": 2048, 00:15:26.710 "data_size": 63488 00:15:26.710 }, 00:15:26.710 { 00:15:26.710 "name": "BaseBdev2", 00:15:26.710 "uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 00:15:26.710 "is_configured": true, 00:15:26.710 "data_offset": 2048, 00:15:26.710 "data_size": 63488 00:15:26.710 }, 00:15:26.710 { 00:15:26.710 "name": "BaseBdev3", 00:15:26.710 "uuid": "a689c472-2417-4df6-8a41-fa75fc7deb49", 00:15:26.710 "is_configured": true, 00:15:26.710 "data_offset": 2048, 00:15:26.710 "data_size": 63488 00:15:26.710 }, 00:15:26.710 { 00:15:26.710 "name": "BaseBdev4", 00:15:26.710 "uuid": "664bfe0a-7350-4991-9e54-4eb8d041e050", 00:15:26.710 "is_configured": true, 00:15:26.710 "data_offset": 2048, 00:15:26.710 "data_size": 63488 00:15:26.710 } 00:15:26.710 ] 00:15:26.710 }' 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.710 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:27.281 
14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:27.281 [2024-11-27 14:14:57.959994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.281 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:27.281 "name": "Existed_Raid", 00:15:27.281 "aliases": [ 00:15:27.281 "7b59d867-4e90-4812-b860-3c41b7c4f83a" 00:15:27.281 ], 00:15:27.281 "product_name": "Raid Volume", 00:15:27.281 "block_size": 512, 00:15:27.281 "num_blocks": 253952, 00:15:27.281 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:27.281 "assigned_rate_limits": { 00:15:27.281 "rw_ios_per_sec": 0, 00:15:27.281 "rw_mbytes_per_sec": 0, 00:15:27.281 "r_mbytes_per_sec": 0, 00:15:27.281 "w_mbytes_per_sec": 0 00:15:27.281 }, 00:15:27.281 "claimed": false, 00:15:27.281 "zoned": false, 00:15:27.281 "supported_io_types": { 00:15:27.281 "read": true, 00:15:27.281 "write": true, 00:15:27.281 "unmap": true, 00:15:27.281 "flush": true, 00:15:27.281 "reset": true, 00:15:27.281 "nvme_admin": false, 00:15:27.281 "nvme_io": false, 00:15:27.281 "nvme_io_md": false, 00:15:27.281 "write_zeroes": true, 00:15:27.281 "zcopy": false, 00:15:27.281 "get_zone_info": false, 00:15:27.281 "zone_management": false, 00:15:27.281 "zone_append": false, 00:15:27.281 "compare": false, 00:15:27.281 "compare_and_write": false, 00:15:27.281 "abort": false, 00:15:27.281 "seek_hole": false, 00:15:27.281 "seek_data": false, 00:15:27.281 "copy": false, 00:15:27.281 
"nvme_iov_md": false 00:15:27.281 }, 00:15:27.281 "memory_domains": [ 00:15:27.281 { 00:15:27.281 "dma_device_id": "system", 00:15:27.281 "dma_device_type": 1 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.281 "dma_device_type": 2 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "system", 00:15:27.281 "dma_device_type": 1 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.281 "dma_device_type": 2 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "system", 00:15:27.281 "dma_device_type": 1 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.281 "dma_device_type": 2 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "system", 00:15:27.281 "dma_device_type": 1 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.281 "dma_device_type": 2 00:15:27.281 } 00:15:27.281 ], 00:15:27.281 "driver_specific": { 00:15:27.281 "raid": { 00:15:27.281 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:27.281 "strip_size_kb": 64, 00:15:27.281 "state": "online", 00:15:27.281 "raid_level": "concat", 00:15:27.281 "superblock": true, 00:15:27.281 "num_base_bdevs": 4, 00:15:27.281 "num_base_bdevs_discovered": 4, 00:15:27.281 "num_base_bdevs_operational": 4, 00:15:27.281 "base_bdevs_list": [ 00:15:27.281 { 00:15:27.281 "name": "BaseBdev1", 00:15:27.281 "uuid": "2f7e9e51-1414-42ef-a9b3-598e00721051", 00:15:27.281 "is_configured": true, 00:15:27.281 "data_offset": 2048, 00:15:27.281 "data_size": 63488 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "name": "BaseBdev2", 00:15:27.281 "uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 00:15:27.281 "is_configured": true, 00:15:27.281 "data_offset": 2048, 00:15:27.281 "data_size": 63488 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "name": "BaseBdev3", 00:15:27.281 "uuid": "a689c472-2417-4df6-8a41-fa75fc7deb49", 00:15:27.281 "is_configured": true, 
00:15:27.281 "data_offset": 2048, 00:15:27.281 "data_size": 63488 00:15:27.281 }, 00:15:27.281 { 00:15:27.281 "name": "BaseBdev4", 00:15:27.281 "uuid": "664bfe0a-7350-4991-9e54-4eb8d041e050", 00:15:27.281 "is_configured": true, 00:15:27.281 "data_offset": 2048, 00:15:27.281 "data_size": 63488 00:15:27.282 } 00:15:27.282 ] 00:15:27.282 } 00:15:27.282 } 00:15:27.282 }' 00:15:27.282 14:14:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:27.282 BaseBdev2 00:15:27.282 BaseBdev3 00:15:27.282 BaseBdev4' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.282 14:14:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.282 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.541 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 [2024-11-27 14:14:58.307085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.542 [2024-11-27 14:14:58.307125] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.542 [2024-11-27 14:14:58.307175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
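The branch at `bdev_raid.sh@260-262` decides the expected state after `bdev_malloc_delete BaseBdev1`: `has_redundancy concat` returns 1, so the test expects the array to go `offline` rather than stay degraded-but-online. A small Python sketch of that decision; note the set of redundant levels is an assumption for illustration (the trace only shows that `concat` is treated as non-redundant):

```python
# Hypothetical set of RAID levels treated as redundant; the trace only
# demonstrates that `has_redundancy concat` fails (returns 1).
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_base_bdev_loss(raid_level: str) -> str:
    """State the test expects once one base bdev has been removed.

    A redundant level can survive the loss and stay online; a
    non-redundant level like concat must transition to offline.
    """
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

# Mirrors expected_state=offline set before verify_raid_bdev_state
# Existed_Raid offline concat 64 3 in the trace above.
assert expected_state_after_base_bdev_loss("concat") == "offline"
```

The subsequent `verify_raid_bdev_state Existed_Raid offline concat 64 3` call then confirms the dump shows `"state": "offline"` with three remaining operational base bdevs and a zeroed-out placeholder for the deleted one.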
00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.542 "name": "Existed_Raid", 00:15:27.542 "uuid": "7b59d867-4e90-4812-b860-3c41b7c4f83a", 00:15:27.542 "strip_size_kb": 64, 00:15:27.542 "state": "offline", 00:15:27.542 "raid_level": "concat", 00:15:27.542 "superblock": true, 00:15:27.542 "num_base_bdevs": 4, 00:15:27.542 "num_base_bdevs_discovered": 3, 00:15:27.542 "num_base_bdevs_operational": 3, 00:15:27.542 "base_bdevs_list": [ 00:15:27.542 { 00:15:27.542 "name": null, 00:15:27.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.542 "is_configured": false, 00:15:27.542 "data_offset": 0, 00:15:27.542 "data_size": 63488 00:15:27.542 }, 00:15:27.542 { 00:15:27.542 "name": "BaseBdev2", 00:15:27.542 "uuid": "ce90fc0d-0609-46a9-a88e-329cc5e8d9c1", 00:15:27.542 "is_configured": true, 00:15:27.542 "data_offset": 2048, 00:15:27.542 "data_size": 63488 00:15:27.542 }, 00:15:27.542 { 00:15:27.542 "name": "BaseBdev3", 00:15:27.542 "uuid": "a689c472-2417-4df6-8a41-fa75fc7deb49", 00:15:27.542 "is_configured": true, 00:15:27.542 "data_offset": 2048, 00:15:27.542 "data_size": 63488 00:15:27.542 }, 00:15:27.542 { 00:15:27.542 "name": "BaseBdev4", 00:15:27.542 "uuid": "664bfe0a-7350-4991-9e54-4eb8d041e050", 00:15:27.542 "is_configured": true, 00:15:27.542 "data_offset": 2048, 00:15:27.542 "data_size": 63488 00:15:27.542 } 00:15:27.542 ] 00:15:27.542 }' 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.542 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.110 
14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.110 [2024-11-27 14:14:58.904445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.110 14:14:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.110 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.111 [2024-11-27 14:14:59.055312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:28.370 14:14:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.370 [2024-11-27 14:14:59.205004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:28.370 [2024-11-27 14:14:59.205124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.370 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 BaseBdev2 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 [ 00:15:28.630 { 00:15:28.630 "name": "BaseBdev2", 00:15:28.630 "aliases": [ 00:15:28.630 
"3f8cf44e-ca1b-4e51-b920-564f180f337a" 00:15:28.630 ], 00:15:28.630 "product_name": "Malloc disk", 00:15:28.630 "block_size": 512, 00:15:28.630 "num_blocks": 65536, 00:15:28.630 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:28.630 "assigned_rate_limits": { 00:15:28.630 "rw_ios_per_sec": 0, 00:15:28.630 "rw_mbytes_per_sec": 0, 00:15:28.630 "r_mbytes_per_sec": 0, 00:15:28.630 "w_mbytes_per_sec": 0 00:15:28.630 }, 00:15:28.630 "claimed": false, 00:15:28.630 "zoned": false, 00:15:28.630 "supported_io_types": { 00:15:28.630 "read": true, 00:15:28.630 "write": true, 00:15:28.630 "unmap": true, 00:15:28.630 "flush": true, 00:15:28.630 "reset": true, 00:15:28.630 "nvme_admin": false, 00:15:28.630 "nvme_io": false, 00:15:28.630 "nvme_io_md": false, 00:15:28.630 "write_zeroes": true, 00:15:28.630 "zcopy": true, 00:15:28.630 "get_zone_info": false, 00:15:28.630 "zone_management": false, 00:15:28.630 "zone_append": false, 00:15:28.630 "compare": false, 00:15:28.630 "compare_and_write": false, 00:15:28.630 "abort": true, 00:15:28.630 "seek_hole": false, 00:15:28.630 "seek_data": false, 00:15:28.630 "copy": true, 00:15:28.630 "nvme_iov_md": false 00:15:28.630 }, 00:15:28.630 "memory_domains": [ 00:15:28.630 { 00:15:28.630 "dma_device_id": "system", 00:15:28.630 "dma_device_type": 1 00:15:28.630 }, 00:15:28.630 { 00:15:28.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.630 "dma_device_type": 2 00:15:28.630 } 00:15:28.630 ], 00:15:28.630 "driver_specific": {} 00:15:28.630 } 00:15:28.630 ] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.630 14:14:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 BaseBdev3 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 [ 00:15:28.630 { 
00:15:28.630 "name": "BaseBdev3", 00:15:28.630 "aliases": [ 00:15:28.630 "815eaaec-c46b-4d36-9f92-a110dac7a3db" 00:15:28.630 ], 00:15:28.630 "product_name": "Malloc disk", 00:15:28.630 "block_size": 512, 00:15:28.630 "num_blocks": 65536, 00:15:28.630 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:28.630 "assigned_rate_limits": { 00:15:28.630 "rw_ios_per_sec": 0, 00:15:28.630 "rw_mbytes_per_sec": 0, 00:15:28.630 "r_mbytes_per_sec": 0, 00:15:28.630 "w_mbytes_per_sec": 0 00:15:28.630 }, 00:15:28.630 "claimed": false, 00:15:28.630 "zoned": false, 00:15:28.630 "supported_io_types": { 00:15:28.630 "read": true, 00:15:28.630 "write": true, 00:15:28.630 "unmap": true, 00:15:28.630 "flush": true, 00:15:28.630 "reset": true, 00:15:28.630 "nvme_admin": false, 00:15:28.630 "nvme_io": false, 00:15:28.630 "nvme_io_md": false, 00:15:28.630 "write_zeroes": true, 00:15:28.630 "zcopy": true, 00:15:28.630 "get_zone_info": false, 00:15:28.630 "zone_management": false, 00:15:28.630 "zone_append": false, 00:15:28.630 "compare": false, 00:15:28.630 "compare_and_write": false, 00:15:28.630 "abort": true, 00:15:28.630 "seek_hole": false, 00:15:28.630 "seek_data": false, 00:15:28.630 "copy": true, 00:15:28.630 "nvme_iov_md": false 00:15:28.630 }, 00:15:28.630 "memory_domains": [ 00:15:28.630 { 00:15:28.630 "dma_device_id": "system", 00:15:28.630 "dma_device_type": 1 00:15:28.630 }, 00:15:28.630 { 00:15:28.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.630 "dma_device_type": 2 00:15:28.630 } 00:15:28.630 ], 00:15:28.630 "driver_specific": {} 00:15:28.630 } 00:15:28.630 ] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.630 BaseBdev4 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:28.630 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.631 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:28.891 [ 00:15:28.891 { 00:15:28.891 "name": "BaseBdev4", 00:15:28.891 "aliases": [ 00:15:28.891 "24019fe2-8267-419b-963b-ee29ab1a8bd6" 00:15:28.891 ], 00:15:28.891 "product_name": "Malloc disk", 00:15:28.891 "block_size": 512, 00:15:28.891 "num_blocks": 65536, 00:15:28.891 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:28.891 "assigned_rate_limits": { 00:15:28.891 "rw_ios_per_sec": 0, 00:15:28.891 "rw_mbytes_per_sec": 0, 00:15:28.891 "r_mbytes_per_sec": 0, 00:15:28.891 "w_mbytes_per_sec": 0 00:15:28.891 }, 00:15:28.891 "claimed": false, 00:15:28.891 "zoned": false, 00:15:28.891 "supported_io_types": { 00:15:28.891 "read": true, 00:15:28.891 "write": true, 00:15:28.891 "unmap": true, 00:15:28.891 "flush": true, 00:15:28.891 "reset": true, 00:15:28.891 "nvme_admin": false, 00:15:28.891 "nvme_io": false, 00:15:28.891 "nvme_io_md": false, 00:15:28.891 "write_zeroes": true, 00:15:28.891 "zcopy": true, 00:15:28.891 "get_zone_info": false, 00:15:28.891 "zone_management": false, 00:15:28.891 "zone_append": false, 00:15:28.891 "compare": false, 00:15:28.891 "compare_and_write": false, 00:15:28.891 "abort": true, 00:15:28.891 "seek_hole": false, 00:15:28.891 "seek_data": false, 00:15:28.891 "copy": true, 00:15:28.891 "nvme_iov_md": false 00:15:28.891 }, 00:15:28.891 "memory_domains": [ 00:15:28.891 { 00:15:28.891 "dma_device_id": "system", 00:15:28.891 "dma_device_type": 1 00:15:28.891 }, 00:15:28.891 { 00:15:28.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.891 "dma_device_type": 2 00:15:28.891 } 00:15:28.891 ], 00:15:28.891 "driver_specific": {} 00:15:28.891 } 00:15:28.891 ] 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:28.891 14:14:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.891 [2024-11-27 14:14:59.605808] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.891 [2024-11-27 14:14:59.605896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.891 [2024-11-27 14:14:59.605924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.891 [2024-11-27 14:14:59.607954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.891 [2024-11-27 14:14:59.608011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.891 "name": "Existed_Raid", 00:15:28.891 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:28.891 "strip_size_kb": 64, 00:15:28.891 "state": "configuring", 00:15:28.891 "raid_level": "concat", 00:15:28.891 "superblock": true, 00:15:28.891 "num_base_bdevs": 4, 00:15:28.891 "num_base_bdevs_discovered": 3, 00:15:28.891 "num_base_bdevs_operational": 4, 00:15:28.891 "base_bdevs_list": [ 00:15:28.891 { 00:15:28.891 "name": "BaseBdev1", 00:15:28.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.891 "is_configured": false, 00:15:28.891 "data_offset": 0, 00:15:28.891 "data_size": 0 00:15:28.891 }, 00:15:28.891 { 00:15:28.891 "name": "BaseBdev2", 00:15:28.891 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:28.891 "is_configured": true, 00:15:28.891 "data_offset": 2048, 00:15:28.891 "data_size": 63488 
00:15:28.891 }, 00:15:28.891 { 00:15:28.891 "name": "BaseBdev3", 00:15:28.891 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:28.891 "is_configured": true, 00:15:28.891 "data_offset": 2048, 00:15:28.891 "data_size": 63488 00:15:28.891 }, 00:15:28.891 { 00:15:28.891 "name": "BaseBdev4", 00:15:28.891 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:28.891 "is_configured": true, 00:15:28.891 "data_offset": 2048, 00:15:28.891 "data_size": 63488 00:15:28.891 } 00:15:28.891 ] 00:15:28.891 }' 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.891 14:14:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.151 [2024-11-27 14:15:00.053073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.151 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.411 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.411 "name": "Existed_Raid", 00:15:29.411 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:29.411 "strip_size_kb": 64, 00:15:29.411 "state": "configuring", 00:15:29.411 "raid_level": "concat", 00:15:29.411 "superblock": true, 00:15:29.411 "num_base_bdevs": 4, 00:15:29.411 "num_base_bdevs_discovered": 2, 00:15:29.411 "num_base_bdevs_operational": 4, 00:15:29.411 "base_bdevs_list": [ 00:15:29.411 { 00:15:29.411 "name": "BaseBdev1", 00:15:29.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.411 "is_configured": false, 00:15:29.411 "data_offset": 0, 00:15:29.411 "data_size": 0 00:15:29.411 }, 00:15:29.411 { 00:15:29.411 "name": null, 00:15:29.411 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:29.411 "is_configured": false, 00:15:29.411 "data_offset": 0, 00:15:29.411 "data_size": 63488 
00:15:29.411 }, 00:15:29.411 { 00:15:29.411 "name": "BaseBdev3", 00:15:29.411 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:29.411 "is_configured": true, 00:15:29.411 "data_offset": 2048, 00:15:29.411 "data_size": 63488 00:15:29.411 }, 00:15:29.411 { 00:15:29.411 "name": "BaseBdev4", 00:15:29.411 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:29.411 "is_configured": true, 00:15:29.411 "data_offset": 2048, 00:15:29.411 "data_size": 63488 00:15:29.411 } 00:15:29.411 ] 00:15:29.411 }' 00:15:29.411 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.411 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.676 [2024-11-27 14:15:00.579679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.676 BaseBdev1 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.676 [ 00:15:29.676 { 00:15:29.676 "name": "BaseBdev1", 00:15:29.676 "aliases": [ 00:15:29.676 "872e71e0-8fe6-4c25-86d9-934fb42c75a2" 00:15:29.676 ], 00:15:29.676 "product_name": "Malloc disk", 00:15:29.676 "block_size": 512, 00:15:29.676 "num_blocks": 65536, 00:15:29.676 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:29.676 "assigned_rate_limits": { 00:15:29.676 "rw_ios_per_sec": 0, 00:15:29.676 "rw_mbytes_per_sec": 0, 
00:15:29.676 "r_mbytes_per_sec": 0, 00:15:29.676 "w_mbytes_per_sec": 0 00:15:29.676 }, 00:15:29.676 "claimed": true, 00:15:29.676 "claim_type": "exclusive_write", 00:15:29.676 "zoned": false, 00:15:29.676 "supported_io_types": { 00:15:29.676 "read": true, 00:15:29.676 "write": true, 00:15:29.676 "unmap": true, 00:15:29.676 "flush": true, 00:15:29.676 "reset": true, 00:15:29.676 "nvme_admin": false, 00:15:29.676 "nvme_io": false, 00:15:29.676 "nvme_io_md": false, 00:15:29.676 "write_zeroes": true, 00:15:29.676 "zcopy": true, 00:15:29.676 "get_zone_info": false, 00:15:29.676 "zone_management": false, 00:15:29.676 "zone_append": false, 00:15:29.676 "compare": false, 00:15:29.676 "compare_and_write": false, 00:15:29.676 "abort": true, 00:15:29.676 "seek_hole": false, 00:15:29.676 "seek_data": false, 00:15:29.676 "copy": true, 00:15:29.676 "nvme_iov_md": false 00:15:29.676 }, 00:15:29.676 "memory_domains": [ 00:15:29.676 { 00:15:29.676 "dma_device_id": "system", 00:15:29.676 "dma_device_type": 1 00:15:29.676 }, 00:15:29.676 { 00:15:29.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.676 "dma_device_type": 2 00:15:29.676 } 00:15:29.676 ], 00:15:29.676 "driver_specific": {} 00:15:29.676 } 00:15:29.676 ] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:29.676 14:15:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.676 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.943 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.944 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.944 "name": "Existed_Raid", 00:15:29.944 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:29.944 "strip_size_kb": 64, 00:15:29.944 "state": "configuring", 00:15:29.944 "raid_level": "concat", 00:15:29.944 "superblock": true, 00:15:29.944 "num_base_bdevs": 4, 00:15:29.944 "num_base_bdevs_discovered": 3, 00:15:29.944 "num_base_bdevs_operational": 4, 00:15:29.944 "base_bdevs_list": [ 00:15:29.944 { 00:15:29.944 "name": "BaseBdev1", 00:15:29.944 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:29.944 "is_configured": true, 00:15:29.944 "data_offset": 2048, 00:15:29.944 "data_size": 63488 00:15:29.944 }, 00:15:29.944 { 
00:15:29.944 "name": null, 00:15:29.944 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:29.944 "is_configured": false, 00:15:29.944 "data_offset": 0, 00:15:29.944 "data_size": 63488 00:15:29.944 }, 00:15:29.944 { 00:15:29.944 "name": "BaseBdev3", 00:15:29.944 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:29.944 "is_configured": true, 00:15:29.944 "data_offset": 2048, 00:15:29.944 "data_size": 63488 00:15:29.944 }, 00:15:29.944 { 00:15:29.944 "name": "BaseBdev4", 00:15:29.944 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:29.944 "is_configured": true, 00:15:29.944 "data_offset": 2048, 00:15:29.944 "data_size": 63488 00:15:29.944 } 00:15:29.944 ] 00:15:29.944 }' 00:15:29.944 14:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.944 14:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.204 [2024-11-27 14:15:01.058974] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.204 14:15:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.204 "name": "Existed_Raid", 00:15:30.204 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:30.204 "strip_size_kb": 64, 00:15:30.204 "state": "configuring", 00:15:30.204 "raid_level": "concat", 00:15:30.204 "superblock": true, 00:15:30.204 "num_base_bdevs": 4, 00:15:30.204 "num_base_bdevs_discovered": 2, 00:15:30.204 "num_base_bdevs_operational": 4, 00:15:30.204 "base_bdevs_list": [ 00:15:30.204 { 00:15:30.204 "name": "BaseBdev1", 00:15:30.204 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:30.204 "is_configured": true, 00:15:30.204 "data_offset": 2048, 00:15:30.204 "data_size": 63488 00:15:30.204 }, 00:15:30.204 { 00:15:30.204 "name": null, 00:15:30.204 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:30.204 "is_configured": false, 00:15:30.204 "data_offset": 0, 00:15:30.204 "data_size": 63488 00:15:30.204 }, 00:15:30.204 { 00:15:30.204 "name": null, 00:15:30.204 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:30.204 "is_configured": false, 00:15:30.204 "data_offset": 0, 00:15:30.204 "data_size": 63488 00:15:30.204 }, 00:15:30.204 { 00:15:30.204 "name": "BaseBdev4", 00:15:30.204 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:30.204 "is_configured": true, 00:15:30.204 "data_offset": 2048, 00:15:30.204 "data_size": 63488 00:15:30.204 } 00:15:30.204 ] 00:15:30.204 }' 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.204 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.772 
14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 [2024-11-27 14:15:01.514181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.772 "name": "Existed_Raid", 00:15:30.772 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:30.772 "strip_size_kb": 64, 00:15:30.772 "state": "configuring", 00:15:30.772 "raid_level": "concat", 00:15:30.772 "superblock": true, 00:15:30.772 "num_base_bdevs": 4, 00:15:30.772 "num_base_bdevs_discovered": 3, 00:15:30.772 "num_base_bdevs_operational": 4, 00:15:30.772 "base_bdevs_list": [ 00:15:30.772 { 00:15:30.772 "name": "BaseBdev1", 00:15:30.772 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:30.772 "is_configured": true, 00:15:30.772 "data_offset": 2048, 00:15:30.772 "data_size": 63488 00:15:30.772 }, 00:15:30.772 { 00:15:30.772 "name": null, 00:15:30.772 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:30.772 "is_configured": false, 00:15:30.772 "data_offset": 0, 00:15:30.772 "data_size": 63488 00:15:30.772 }, 00:15:30.772 { 00:15:30.772 "name": "BaseBdev3", 00:15:30.772 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:30.772 "is_configured": true, 00:15:30.772 "data_offset": 2048, 00:15:30.772 "data_size": 63488 00:15:30.772 }, 00:15:30.772 { 00:15:30.772 "name": "BaseBdev4", 00:15:30.772 "uuid": 
"24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:30.772 "is_configured": true, 00:15:30.772 "data_offset": 2048, 00:15:30.772 "data_size": 63488 00:15:30.772 } 00:15:30.772 ] 00:15:30.772 }' 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.772 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.032 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.032 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.032 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.032 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.032 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.292 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.292 14:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.292 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.292 14:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.292 [2024-11-27 14:15:01.997450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.292 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.293 "name": "Existed_Raid", 00:15:31.293 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:31.293 "strip_size_kb": 64, 00:15:31.293 "state": "configuring", 00:15:31.293 "raid_level": "concat", 00:15:31.293 "superblock": true, 00:15:31.293 "num_base_bdevs": 4, 00:15:31.293 "num_base_bdevs_discovered": 2, 00:15:31.293 "num_base_bdevs_operational": 4, 00:15:31.293 "base_bdevs_list": [ 00:15:31.293 { 00:15:31.293 "name": null, 00:15:31.293 
"uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:31.293 "is_configured": false, 00:15:31.293 "data_offset": 0, 00:15:31.293 "data_size": 63488 00:15:31.293 }, 00:15:31.293 { 00:15:31.293 "name": null, 00:15:31.293 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:31.293 "is_configured": false, 00:15:31.293 "data_offset": 0, 00:15:31.293 "data_size": 63488 00:15:31.293 }, 00:15:31.293 { 00:15:31.293 "name": "BaseBdev3", 00:15:31.293 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:31.293 "is_configured": true, 00:15:31.293 "data_offset": 2048, 00:15:31.293 "data_size": 63488 00:15:31.293 }, 00:15:31.293 { 00:15:31.293 "name": "BaseBdev4", 00:15:31.293 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:31.293 "is_configured": true, 00:15:31.293 "data_offset": 2048, 00:15:31.293 "data_size": 63488 00:15:31.293 } 00:15:31.293 ] 00:15:31.293 }' 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.293 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.860 [2024-11-27 14:15:02.631674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:31.860 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.861 14:15:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.861 "name": "Existed_Raid", 00:15:31.861 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:31.861 "strip_size_kb": 64, 00:15:31.861 "state": "configuring", 00:15:31.861 "raid_level": "concat", 00:15:31.861 "superblock": true, 00:15:31.861 "num_base_bdevs": 4, 00:15:31.861 "num_base_bdevs_discovered": 3, 00:15:31.861 "num_base_bdevs_operational": 4, 00:15:31.861 "base_bdevs_list": [ 00:15:31.861 { 00:15:31.861 "name": null, 00:15:31.861 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:31.861 "is_configured": false, 00:15:31.861 "data_offset": 0, 00:15:31.861 "data_size": 63488 00:15:31.861 }, 00:15:31.861 { 00:15:31.861 "name": "BaseBdev2", 00:15:31.861 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:31.861 "is_configured": true, 00:15:31.861 "data_offset": 2048, 00:15:31.861 "data_size": 63488 00:15:31.861 }, 00:15:31.861 { 00:15:31.861 "name": "BaseBdev3", 00:15:31.861 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:31.861 "is_configured": true, 00:15:31.861 "data_offset": 2048, 00:15:31.861 "data_size": 63488 00:15:31.861 }, 00:15:31.861 { 00:15:31.861 "name": "BaseBdev4", 00:15:31.861 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:31.861 "is_configured": true, 00:15:31.861 "data_offset": 2048, 00:15:31.861 "data_size": 63488 00:15:31.861 } 00:15:31.861 ] 00:15:31.861 }' 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.861 14:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.120 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.120 14:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.120 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.120 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.120 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 872e71e0-8fe6-4c25-86d9-934fb42c75a2 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.379 [2024-11-27 14:15:03.179768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.379 [2024-11-27 14:15:03.180112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.379 [2024-11-27 14:15:03.180207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:32.379 [2024-11-27 14:15:03.180526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:15:32.379 [2024-11-27 14:15:03.180719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.379 [2024-11-27 14:15:03.180765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.379 NewBaseBdev 00:15:32.379 [2024-11-27 14:15:03.180960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.379 14:15:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.379 [ 00:15:32.379 { 00:15:32.379 "name": "NewBaseBdev", 00:15:32.379 "aliases": [ 00:15:32.379 "872e71e0-8fe6-4c25-86d9-934fb42c75a2" 00:15:32.379 ], 00:15:32.379 "product_name": "Malloc disk", 00:15:32.379 "block_size": 512, 00:15:32.379 "num_blocks": 65536, 00:15:32.379 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:32.379 "assigned_rate_limits": { 00:15:32.379 "rw_ios_per_sec": 0, 00:15:32.379 "rw_mbytes_per_sec": 0, 00:15:32.379 "r_mbytes_per_sec": 0, 00:15:32.379 "w_mbytes_per_sec": 0 00:15:32.379 }, 00:15:32.379 "claimed": true, 00:15:32.379 "claim_type": "exclusive_write", 00:15:32.379 "zoned": false, 00:15:32.379 "supported_io_types": { 00:15:32.379 "read": true, 00:15:32.379 "write": true, 00:15:32.379 "unmap": true, 00:15:32.379 "flush": true, 00:15:32.379 "reset": true, 00:15:32.379 "nvme_admin": false, 00:15:32.379 "nvme_io": false, 00:15:32.379 "nvme_io_md": false, 00:15:32.379 "write_zeroes": true, 00:15:32.379 "zcopy": true, 00:15:32.379 "get_zone_info": false, 00:15:32.379 "zone_management": false, 00:15:32.379 "zone_append": false, 00:15:32.379 "compare": false, 00:15:32.379 "compare_and_write": false, 00:15:32.379 "abort": true, 00:15:32.379 "seek_hole": false, 00:15:32.379 "seek_data": false, 00:15:32.379 "copy": true, 00:15:32.379 "nvme_iov_md": false 00:15:32.379 }, 00:15:32.379 "memory_domains": [ 00:15:32.379 { 00:15:32.379 "dma_device_id": "system", 00:15:32.379 "dma_device_type": 1 00:15:32.379 }, 00:15:32.379 { 00:15:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.379 "dma_device_type": 2 00:15:32.379 } 00:15:32.379 ], 00:15:32.379 "driver_specific": {} 00:15:32.379 } 00:15:32.379 ] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:32.379 14:15:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.379 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.380 "name": "Existed_Raid", 00:15:32.380 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:32.380 "strip_size_kb": 64, 00:15:32.380 
"state": "online", 00:15:32.380 "raid_level": "concat", 00:15:32.380 "superblock": true, 00:15:32.380 "num_base_bdevs": 4, 00:15:32.380 "num_base_bdevs_discovered": 4, 00:15:32.380 "num_base_bdevs_operational": 4, 00:15:32.380 "base_bdevs_list": [ 00:15:32.380 { 00:15:32.380 "name": "NewBaseBdev", 00:15:32.380 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 }, 00:15:32.380 { 00:15:32.380 "name": "BaseBdev2", 00:15:32.380 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 }, 00:15:32.380 { 00:15:32.380 "name": "BaseBdev3", 00:15:32.380 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 }, 00:15:32.380 { 00:15:32.380 "name": "BaseBdev4", 00:15:32.380 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:32.380 "is_configured": true, 00:15:32.380 "data_offset": 2048, 00:15:32.380 "data_size": 63488 00:15:32.380 } 00:15:32.380 ] 00:15:32.380 }' 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.380 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.947 
14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 [2024-11-27 14:15:03.671320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.947 "name": "Existed_Raid", 00:15:32.947 "aliases": [ 00:15:32.947 "719bd798-29db-489a-8ff3-a0d4b1397ea2" 00:15:32.947 ], 00:15:32.947 "product_name": "Raid Volume", 00:15:32.947 "block_size": 512, 00:15:32.947 "num_blocks": 253952, 00:15:32.947 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:32.947 "assigned_rate_limits": { 00:15:32.947 "rw_ios_per_sec": 0, 00:15:32.947 "rw_mbytes_per_sec": 0, 00:15:32.947 "r_mbytes_per_sec": 0, 00:15:32.947 "w_mbytes_per_sec": 0 00:15:32.947 }, 00:15:32.947 "claimed": false, 00:15:32.947 "zoned": false, 00:15:32.947 "supported_io_types": { 00:15:32.947 "read": true, 00:15:32.947 "write": true, 00:15:32.947 "unmap": true, 00:15:32.947 "flush": true, 00:15:32.947 "reset": true, 00:15:32.947 "nvme_admin": false, 00:15:32.947 "nvme_io": false, 00:15:32.947 "nvme_io_md": false, 00:15:32.947 "write_zeroes": true, 00:15:32.947 "zcopy": false, 00:15:32.947 "get_zone_info": false, 00:15:32.947 "zone_management": false, 00:15:32.947 "zone_append": false, 00:15:32.947 "compare": false, 00:15:32.947 "compare_and_write": false, 00:15:32.947 "abort": 
false, 00:15:32.947 "seek_hole": false, 00:15:32.947 "seek_data": false, 00:15:32.947 "copy": false, 00:15:32.947 "nvme_iov_md": false 00:15:32.947 }, 00:15:32.947 "memory_domains": [ 00:15:32.947 { 00:15:32.947 "dma_device_id": "system", 00:15:32.947 "dma_device_type": 1 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.947 "dma_device_type": 2 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "system", 00:15:32.947 "dma_device_type": 1 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.947 "dma_device_type": 2 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "system", 00:15:32.947 "dma_device_type": 1 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.947 "dma_device_type": 2 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "system", 00:15:32.947 "dma_device_type": 1 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.947 "dma_device_type": 2 00:15:32.947 } 00:15:32.947 ], 00:15:32.947 "driver_specific": { 00:15:32.947 "raid": { 00:15:32.947 "uuid": "719bd798-29db-489a-8ff3-a0d4b1397ea2", 00:15:32.947 "strip_size_kb": 64, 00:15:32.947 "state": "online", 00:15:32.947 "raid_level": "concat", 00:15:32.947 "superblock": true, 00:15:32.947 "num_base_bdevs": 4, 00:15:32.947 "num_base_bdevs_discovered": 4, 00:15:32.947 "num_base_bdevs_operational": 4, 00:15:32.947 "base_bdevs_list": [ 00:15:32.947 { 00:15:32.947 "name": "NewBaseBdev", 00:15:32.947 "uuid": "872e71e0-8fe6-4c25-86d9-934fb42c75a2", 00:15:32.947 "is_configured": true, 00:15:32.947 "data_offset": 2048, 00:15:32.947 "data_size": 63488 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "name": "BaseBdev2", 00:15:32.947 "uuid": "3f8cf44e-ca1b-4e51-b920-564f180f337a", 00:15:32.947 "is_configured": true, 00:15:32.947 "data_offset": 2048, 00:15:32.947 "data_size": 63488 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 
"name": "BaseBdev3", 00:15:32.947 "uuid": "815eaaec-c46b-4d36-9f92-a110dac7a3db", 00:15:32.947 "is_configured": true, 00:15:32.947 "data_offset": 2048, 00:15:32.947 "data_size": 63488 00:15:32.947 }, 00:15:32.947 { 00:15:32.947 "name": "BaseBdev4", 00:15:32.947 "uuid": "24019fe2-8267-419b-963b-ee29ab1a8bd6", 00:15:32.947 "is_configured": true, 00:15:32.947 "data_offset": 2048, 00:15:32.947 "data_size": 63488 00:15:32.947 } 00:15:32.947 ] 00:15:32.947 } 00:15:32.947 } 00:15:32.947 }' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:32.947 BaseBdev2 00:15:32.947 BaseBdev3 00:15:32.947 BaseBdev4' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.947 14:15:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.947 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.206 [2024-11-27 14:15:03.974426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.206 [2024-11-27 14:15:03.974496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.206 [2024-11-27 14:15:03.974603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.206 [2024-11-27 14:15:03.974711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.206 [2024-11-27 14:15:03.974780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72189 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72189 ']' 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72189 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.206 14:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72189 00:15:33.206 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.206 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.206 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72189' 00:15:33.206 killing process with pid 72189 00:15:33.206 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72189 00:15:33.206 [2024-11-27 14:15:04.017399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.206 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72189 00:15:33.776 [2024-11-27 14:15:04.428998] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.714 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:34.714 ************************************ 00:15:34.714 END TEST raid_state_function_test_sb 00:15:34.714 ************************************ 00:15:34.714 00:15:34.714 real 0m11.484s 00:15:34.714 user 0m18.236s 00:15:34.714 sys 
0m2.019s 00:15:34.714 14:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.714 14:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.714 14:15:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:34.714 14:15:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:34.714 14:15:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.714 14:15:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:34.714 ************************************ 00:15:34.714 START TEST raid_superblock_test 00:15:34.714 ************************************ 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72859 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72859 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72859 ']' 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.714 14:15:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.974 [2024-11-27 14:15:05.734440] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:34.974 [2024-11-27 14:15:05.734701] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72859 ] 00:15:34.974 [2024-11-27 14:15:05.911131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.234 [2024-11-27 14:15:06.058512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.493 [2024-11-27 14:15:06.293045] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.493 [2024-11-27 14:15:06.293227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:35.753 
14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.753 malloc1 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.753 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.753 [2024-11-27 14:15:06.664993] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.753 [2024-11-27 14:15:06.665098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.754 [2024-11-27 14:15:06.665138] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:35.754 [2024-11-27 14:15:06.665149] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.754 [2024-11-27 14:15:06.667250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.754 [2024-11-27 14:15:06.667286] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.754 pt1 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.754 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 malloc2 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 [2024-11-27 14:15:06.721378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.014 [2024-11-27 14:15:06.721496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.014 [2024-11-27 14:15:06.721540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.014 [2024-11-27 14:15:06.721569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.014 [2024-11-27 14:15:06.723699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.014 [2024-11-27 14:15:06.723785] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.014 
pt2 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 malloc3 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 [2024-11-27 14:15:06.791027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.014 [2024-11-27 14:15:06.791158] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.014 [2024-11-27 14:15:06.791205] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:36.014 [2024-11-27 14:15:06.791244] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.014 [2024-11-27 14:15:06.793556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.014 [2024-11-27 14:15:06.793656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.014 pt3 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 malloc4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 [2024-11-27 14:15:06.850189] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:36.014 [2024-11-27 14:15:06.850251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.014 [2024-11-27 14:15:06.850272] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:36.014 [2024-11-27 14:15:06.850281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.014 [2024-11-27 14:15:06.852569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.014 [2024-11-27 14:15:06.852610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:36.014 pt4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 [2024-11-27 14:15:06.862198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.014 [2024-11-27 
14:15:06.864054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.014 [2024-11-27 14:15:06.864163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.014 [2024-11-27 14:15:06.864220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:36.014 [2024-11-27 14:15:06.864447] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:36.014 [2024-11-27 14:15:06.864467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:36.014 [2024-11-27 14:15:06.864772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.014 [2024-11-27 14:15:06.864971] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:36.014 [2024-11-27 14:15:06.864986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:36.014 [2024-11-27 14:15:06.865204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.014 "name": "raid_bdev1", 00:15:36.014 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:36.014 "strip_size_kb": 64, 00:15:36.014 "state": "online", 00:15:36.014 "raid_level": "concat", 00:15:36.014 "superblock": true, 00:15:36.014 "num_base_bdevs": 4, 00:15:36.014 "num_base_bdevs_discovered": 4, 00:15:36.014 "num_base_bdevs_operational": 4, 00:15:36.014 "base_bdevs_list": [ 00:15:36.014 { 00:15:36.014 "name": "pt1", 00:15:36.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.014 "is_configured": true, 00:15:36.014 "data_offset": 2048, 00:15:36.014 "data_size": 63488 00:15:36.014 }, 00:15:36.014 { 00:15:36.014 "name": "pt2", 00:15:36.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.014 "is_configured": true, 00:15:36.014 "data_offset": 2048, 00:15:36.014 "data_size": 63488 00:15:36.014 }, 00:15:36.014 { 00:15:36.014 "name": "pt3", 00:15:36.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.014 "is_configured": true, 00:15:36.014 "data_offset": 2048, 00:15:36.014 
"data_size": 63488 00:15:36.014 }, 00:15:36.014 { 00:15:36.014 "name": "pt4", 00:15:36.014 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.014 "is_configured": true, 00:15:36.014 "data_offset": 2048, 00:15:36.014 "data_size": 63488 00:15:36.014 } 00:15:36.014 ] 00:15:36.014 }' 00:15:36.014 14:15:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.015 14:15:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.584 [2024-11-27 14:15:07.373659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.584 "name": "raid_bdev1", 00:15:36.584 "aliases": [ 00:15:36.584 "4ec04d63-0ca0-43ad-ae5f-54a29a851295" 
00:15:36.584 ], 00:15:36.584 "product_name": "Raid Volume", 00:15:36.584 "block_size": 512, 00:15:36.584 "num_blocks": 253952, 00:15:36.584 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:36.584 "assigned_rate_limits": { 00:15:36.584 "rw_ios_per_sec": 0, 00:15:36.584 "rw_mbytes_per_sec": 0, 00:15:36.584 "r_mbytes_per_sec": 0, 00:15:36.584 "w_mbytes_per_sec": 0 00:15:36.584 }, 00:15:36.584 "claimed": false, 00:15:36.584 "zoned": false, 00:15:36.584 "supported_io_types": { 00:15:36.584 "read": true, 00:15:36.584 "write": true, 00:15:36.584 "unmap": true, 00:15:36.584 "flush": true, 00:15:36.584 "reset": true, 00:15:36.584 "nvme_admin": false, 00:15:36.584 "nvme_io": false, 00:15:36.584 "nvme_io_md": false, 00:15:36.584 "write_zeroes": true, 00:15:36.584 "zcopy": false, 00:15:36.584 "get_zone_info": false, 00:15:36.584 "zone_management": false, 00:15:36.584 "zone_append": false, 00:15:36.584 "compare": false, 00:15:36.584 "compare_and_write": false, 00:15:36.584 "abort": false, 00:15:36.584 "seek_hole": false, 00:15:36.584 "seek_data": false, 00:15:36.584 "copy": false, 00:15:36.584 "nvme_iov_md": false 00:15:36.584 }, 00:15:36.584 "memory_domains": [ 00:15:36.584 { 00:15:36.584 "dma_device_id": "system", 00:15:36.584 "dma_device_type": 1 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.584 "dma_device_type": 2 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "system", 00:15:36.584 "dma_device_type": 1 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.584 "dma_device_type": 2 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "system", 00:15:36.584 "dma_device_type": 1 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.584 "dma_device_type": 2 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": "system", 00:15:36.584 "dma_device_type": 1 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:36.584 "dma_device_type": 2 00:15:36.584 } 00:15:36.584 ], 00:15:36.584 "driver_specific": { 00:15:36.584 "raid": { 00:15:36.584 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:36.584 "strip_size_kb": 64, 00:15:36.584 "state": "online", 00:15:36.584 "raid_level": "concat", 00:15:36.584 "superblock": true, 00:15:36.584 "num_base_bdevs": 4, 00:15:36.584 "num_base_bdevs_discovered": 4, 00:15:36.584 "num_base_bdevs_operational": 4, 00:15:36.584 "base_bdevs_list": [ 00:15:36.584 { 00:15:36.584 "name": "pt1", 00:15:36.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.584 "is_configured": true, 00:15:36.584 "data_offset": 2048, 00:15:36.584 "data_size": 63488 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "name": "pt2", 00:15:36.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.584 "is_configured": true, 00:15:36.584 "data_offset": 2048, 00:15:36.584 "data_size": 63488 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "name": "pt3", 00:15:36.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.584 "is_configured": true, 00:15:36.584 "data_offset": 2048, 00:15:36.584 "data_size": 63488 00:15:36.584 }, 00:15:36.584 { 00:15:36.584 "name": "pt4", 00:15:36.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.584 "is_configured": true, 00:15:36.584 "data_offset": 2048, 00:15:36.584 "data_size": 63488 00:15:36.584 } 00:15:36.584 ] 00:15:36.584 } 00:15:36.584 } 00:15:36.584 }' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:36.584 pt2 00:15:36.584 pt3 00:15:36.584 pt4' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.584 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.844 14:15:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 [2024-11-27 14:15:07.717061] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4ec04d63-0ca0-43ad-ae5f-54a29a851295 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4ec04d63-0ca0-43ad-ae5f-54a29a851295 ']' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 [2024-11-27 14:15:07.748657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.844 [2024-11-27 14:15:07.748684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.844 [2024-11-27 14:15:07.748776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.844 [2024-11-27 14:15:07.748858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.844 [2024-11-27 14:15:07.748873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.148 14:15:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.148 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.148 [2024-11-27 14:15:07.912450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:37.148 [2024-11-27 14:15:07.914653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:37.148 [2024-11-27 14:15:07.914750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:37.148 [2024-11-27 14:15:07.914823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:37.148 [2024-11-27 14:15:07.914909] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:37.148 [2024-11-27 14:15:07.915011] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:37.148 [2024-11-27 14:15:07.915075] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:37.148 [2024-11-27 14:15:07.915151] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:37.148 [2024-11-27 14:15:07.915204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.148 [2024-11-27 14:15:07.915241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:15:37.148 request: 00:15:37.148 { 00:15:37.148 "name": "raid_bdev1", 00:15:37.148 "raid_level": "concat", 00:15:37.148 "base_bdevs": [ 00:15:37.148 "malloc1", 00:15:37.148 "malloc2", 00:15:37.148 "malloc3", 00:15:37.148 "malloc4" 00:15:37.148 ], 00:15:37.148 "strip_size_kb": 64, 00:15:37.148 "superblock": false, 00:15:37.148 "method": "bdev_raid_create", 00:15:37.148 "req_id": 1 00:15:37.148 } 00:15:37.148 Got JSON-RPC error response 00:15:37.148 response: 00:15:37.148 { 00:15:37.148 "code": -17, 00:15:37.148 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:37.149 } 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.149 [2024-11-27 14:15:07.980301] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.149 [2024-11-27 14:15:07.980381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.149 [2024-11-27 14:15:07.980407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:37.149 [2024-11-27 14:15:07.980420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.149 [2024-11-27 14:15:07.982913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.149 [2024-11-27 14:15:07.982957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.149 [2024-11-27 14:15:07.983057] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.149 [2024-11-27 14:15:07.983134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.149 pt1 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.149 14:15:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.149 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.149 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.149 "name": "raid_bdev1", 00:15:37.149 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:37.149 "strip_size_kb": 64, 00:15:37.149 "state": "configuring", 00:15:37.149 "raid_level": "concat", 00:15:37.149 "superblock": true, 00:15:37.149 "num_base_bdevs": 4, 00:15:37.149 "num_base_bdevs_discovered": 1, 00:15:37.149 "num_base_bdevs_operational": 4, 00:15:37.149 "base_bdevs_list": [ 00:15:37.149 { 00:15:37.149 "name": "pt1", 00:15:37.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.149 "is_configured": true, 00:15:37.149 "data_offset": 2048, 00:15:37.149 "data_size": 63488 00:15:37.149 }, 00:15:37.149 { 00:15:37.149 "name": null, 00:15:37.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.149 "is_configured": false, 00:15:37.149 "data_offset": 2048, 00:15:37.149 "data_size": 63488 00:15:37.149 }, 00:15:37.149 { 00:15:37.149 "name": null, 00:15:37.149 
"uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.149 "is_configured": false, 00:15:37.149 "data_offset": 2048, 00:15:37.149 "data_size": 63488 00:15:37.149 }, 00:15:37.149 { 00:15:37.149 "name": null, 00:15:37.149 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.149 "is_configured": false, 00:15:37.149 "data_offset": 2048, 00:15:37.149 "data_size": 63488 00:15:37.149 } 00:15:37.149 ] 00:15:37.149 }' 00:15:37.149 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.149 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 [2024-11-27 14:15:08.483608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.716 [2024-11-27 14:15:08.483763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.716 [2024-11-27 14:15:08.483820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:37.716 [2024-11-27 14:15:08.483861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.716 [2024-11-27 14:15:08.484492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.716 [2024-11-27 14:15:08.484563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.716 [2024-11-27 14:15:08.484686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.716 [2024-11-27 14:15:08.484746] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.716 pt2 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 [2024-11-27 14:15:08.495647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.716 14:15:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.716 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.716 "name": "raid_bdev1", 00:15:37.716 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:37.716 "strip_size_kb": 64, 00:15:37.716 "state": "configuring", 00:15:37.716 "raid_level": "concat", 00:15:37.716 "superblock": true, 00:15:37.716 "num_base_bdevs": 4, 00:15:37.716 "num_base_bdevs_discovered": 1, 00:15:37.716 "num_base_bdevs_operational": 4, 00:15:37.716 "base_bdevs_list": [ 00:15:37.716 { 00:15:37.716 "name": "pt1", 00:15:37.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.716 "is_configured": true, 00:15:37.716 "data_offset": 2048, 00:15:37.716 "data_size": 63488 00:15:37.716 }, 00:15:37.716 { 00:15:37.716 "name": null, 00:15:37.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.716 "is_configured": false, 00:15:37.716 "data_offset": 0, 00:15:37.716 "data_size": 63488 00:15:37.716 }, 00:15:37.716 { 00:15:37.716 "name": null, 00:15:37.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.716 "is_configured": false, 00:15:37.716 "data_offset": 2048, 00:15:37.716 "data_size": 63488 00:15:37.716 }, 00:15:37.716 { 00:15:37.716 "name": null, 00:15:37.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.717 "is_configured": false, 00:15:37.717 "data_offset": 2048, 00:15:37.717 "data_size": 63488 00:15:37.717 } 00:15:37.717 ] 00:15:37.717 }' 00:15:37.717 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.717 14:15:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.284 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:38.284 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.285 [2024-11-27 14:15:08.974803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.285 [2024-11-27 14:15:08.974879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.285 [2024-11-27 14:15:08.974900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:38.285 [2024-11-27 14:15:08.974910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.285 [2024-11-27 14:15:08.975447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.285 [2024-11-27 14:15:08.975474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.285 [2024-11-27 14:15:08.975566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:38.285 [2024-11-27 14:15:08.975591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.285 pt2 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.285 [2024-11-27 14:15:08.986776] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.285 [2024-11-27 14:15:08.986889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.285 [2024-11-27 14:15:08.986916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:38.285 [2024-11-27 14:15:08.986927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.285 [2024-11-27 14:15:08.987398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.285 [2024-11-27 14:15:08.987419] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.285 [2024-11-27 14:15:08.987495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.285 [2024-11-27 14:15:08.987525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.285 pt3 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.285 14:15:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.285 [2024-11-27 14:15:08.998729] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:38.285 [2024-11-27 14:15:08.998778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.285 [2024-11-27 14:15:08.998797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:38.285 [2024-11-27 14:15:08.998806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.285 [2024-11-27 14:15:08.999239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.285 [2024-11-27 14:15:08.999269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:38.285 [2024-11-27 14:15:08.999335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:38.285 [2024-11-27 14:15:08.999357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:38.285 [2024-11-27 14:15:08.999487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:38.285 [2024-11-27 14:15:08.999502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:38.285 [2024-11-27 14:15:08.999728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:38.285 [2024-11-27 14:15:08.999875] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:38.285 [2024-11-27 14:15:08.999887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:38.285 [2024-11-27 14:15:09.000015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.285 pt4 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.285 "name": "raid_bdev1", 00:15:38.285 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:38.285 "strip_size_kb": 64, 00:15:38.285 "state": "online", 00:15:38.285 "raid_level": "concat", 00:15:38.285 
"superblock": true, 00:15:38.285 "num_base_bdevs": 4, 00:15:38.285 "num_base_bdevs_discovered": 4, 00:15:38.285 "num_base_bdevs_operational": 4, 00:15:38.285 "base_bdevs_list": [ 00:15:38.285 { 00:15:38.285 "name": "pt1", 00:15:38.285 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.285 "is_configured": true, 00:15:38.285 "data_offset": 2048, 00:15:38.285 "data_size": 63488 00:15:38.285 }, 00:15:38.285 { 00:15:38.285 "name": "pt2", 00:15:38.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.285 "is_configured": true, 00:15:38.285 "data_offset": 2048, 00:15:38.285 "data_size": 63488 00:15:38.285 }, 00:15:38.285 { 00:15:38.285 "name": "pt3", 00:15:38.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.285 "is_configured": true, 00:15:38.285 "data_offset": 2048, 00:15:38.285 "data_size": 63488 00:15:38.285 }, 00:15:38.285 { 00:15:38.285 "name": "pt4", 00:15:38.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.285 "is_configured": true, 00:15:38.285 "data_offset": 2048, 00:15:38.285 "data_size": 63488 00:15:38.285 } 00:15:38.285 ] 00:15:38.285 }' 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.285 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.852 14:15:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.852 [2024-11-27 14:15:09.510324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.852 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.852 "name": "raid_bdev1", 00:15:38.852 "aliases": [ 00:15:38.852 "4ec04d63-0ca0-43ad-ae5f-54a29a851295" 00:15:38.852 ], 00:15:38.852 "product_name": "Raid Volume", 00:15:38.852 "block_size": 512, 00:15:38.852 "num_blocks": 253952, 00:15:38.852 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:38.852 "assigned_rate_limits": { 00:15:38.852 "rw_ios_per_sec": 0, 00:15:38.852 "rw_mbytes_per_sec": 0, 00:15:38.852 "r_mbytes_per_sec": 0, 00:15:38.852 "w_mbytes_per_sec": 0 00:15:38.852 }, 00:15:38.852 "claimed": false, 00:15:38.852 "zoned": false, 00:15:38.852 "supported_io_types": { 00:15:38.852 "read": true, 00:15:38.852 "write": true, 00:15:38.852 "unmap": true, 00:15:38.852 "flush": true, 00:15:38.852 "reset": true, 00:15:38.852 "nvme_admin": false, 00:15:38.852 "nvme_io": false, 00:15:38.852 "nvme_io_md": false, 00:15:38.852 "write_zeroes": true, 00:15:38.852 "zcopy": false, 00:15:38.852 "get_zone_info": false, 00:15:38.852 "zone_management": false, 00:15:38.852 "zone_append": false, 00:15:38.852 "compare": false, 00:15:38.852 "compare_and_write": false, 00:15:38.852 "abort": false, 00:15:38.852 "seek_hole": false, 00:15:38.852 "seek_data": false, 00:15:38.852 "copy": false, 00:15:38.852 "nvme_iov_md": false 00:15:38.852 }, 00:15:38.852 
"memory_domains": [ 00:15:38.852 { 00:15:38.852 "dma_device_id": "system", 00:15:38.852 "dma_device_type": 1 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.852 "dma_device_type": 2 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "system", 00:15:38.852 "dma_device_type": 1 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.852 "dma_device_type": 2 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "system", 00:15:38.852 "dma_device_type": 1 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.852 "dma_device_type": 2 00:15:38.852 }, 00:15:38.852 { 00:15:38.852 "dma_device_id": "system", 00:15:38.852 "dma_device_type": 1 00:15:38.852 }, 00:15:38.853 { 00:15:38.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.853 "dma_device_type": 2 00:15:38.853 } 00:15:38.853 ], 00:15:38.853 "driver_specific": { 00:15:38.853 "raid": { 00:15:38.853 "uuid": "4ec04d63-0ca0-43ad-ae5f-54a29a851295", 00:15:38.853 "strip_size_kb": 64, 00:15:38.853 "state": "online", 00:15:38.853 "raid_level": "concat", 00:15:38.853 "superblock": true, 00:15:38.853 "num_base_bdevs": 4, 00:15:38.853 "num_base_bdevs_discovered": 4, 00:15:38.853 "num_base_bdevs_operational": 4, 00:15:38.853 "base_bdevs_list": [ 00:15:38.853 { 00:15:38.853 "name": "pt1", 00:15:38.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.853 "is_configured": true, 00:15:38.853 "data_offset": 2048, 00:15:38.853 "data_size": 63488 00:15:38.853 }, 00:15:38.853 { 00:15:38.853 "name": "pt2", 00:15:38.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.853 "is_configured": true, 00:15:38.853 "data_offset": 2048, 00:15:38.853 "data_size": 63488 00:15:38.853 }, 00:15:38.853 { 00:15:38.853 "name": "pt3", 00:15:38.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.853 "is_configured": true, 00:15:38.853 "data_offset": 2048, 00:15:38.853 "data_size": 63488 
00:15:38.853 }, 00:15:38.853 { 00:15:38.853 "name": "pt4", 00:15:38.853 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.853 "is_configured": true, 00:15:38.853 "data_offset": 2048, 00:15:38.853 "data_size": 63488 00:15:38.853 } 00:15:38.853 ] 00:15:38.853 } 00:15:38.853 } 00:15:38.853 }' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:38.853 pt2 00:15:38.853 pt3 00:15:38.853 pt4' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.853 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.113 [2024-11-27 14:15:09.869675] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4ec04d63-0ca0-43ad-ae5f-54a29a851295 '!=' 4ec04d63-0ca0-43ad-ae5f-54a29a851295 ']' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72859 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72859 ']' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72859 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72859 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72859' 00:15:39.113 killing process with pid 72859 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72859 00:15:39.113 [2024-11-27 14:15:09.943879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.113 [2024-11-27 14:15:09.944065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.113 14:15:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72859 00:15:39.113 [2024-11-27 14:15:09.944211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.113 [2024-11-27 14:15:09.944227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:39.684 [2024-11-27 14:15:10.402530] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.064 14:15:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:41.064 00:15:41.064 real 0m6.038s 00:15:41.064 user 0m8.670s 00:15:41.064 sys 0m0.980s 00:15:41.064 14:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.064 14:15:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 ************************************ 00:15:41.064 END TEST raid_superblock_test 
00:15:41.064 ************************************ 00:15:41.064 14:15:11 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:41.064 14:15:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:41.064 14:15:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.064 14:15:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 ************************************ 00:15:41.064 START TEST raid_read_error_test 00:15:41.064 ************************************ 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZOc4oWzdwc 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73124 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73124 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73124 ']' 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.064 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.065 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.065 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.065 14:15:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 [2024-11-27 14:15:11.873173] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:41.065 [2024-11-27 14:15:11.873470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73124 ] 00:15:41.323 [2024-11-27 14:15:12.048348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.323 [2024-11-27 14:15:12.178247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.582 [2024-11-27 14:15:12.402400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.582 [2024-11-27 14:15:12.402471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.841 BaseBdev1_malloc 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.841 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 true 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 [2024-11-27 14:15:12.800979] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:42.101 [2024-11-27 14:15:12.801042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.101 [2024-11-27 14:15:12.801071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:42.101 [2024-11-27 14:15:12.801087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.101 [2024-11-27 14:15:12.803424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.101 [2024-11-27 14:15:12.803465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:42.101 BaseBdev1 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 BaseBdev2_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 true 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 [2024-11-27 14:15:12.869685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:42.101 [2024-11-27 14:15:12.869852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.101 [2024-11-27 14:15:12.869880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:42.101 [2024-11-27 14:15:12.869892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.101 [2024-11-27 14:15:12.872374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.101 [2024-11-27 14:15:12.872423] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:42.101 BaseBdev2 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 BaseBdev3_malloc 00:15:42.101 14:15:12 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 true 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 [2024-11-27 14:15:12.953052] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:42.101 [2024-11-27 14:15:12.953147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.101 [2024-11-27 14:15:12.953176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:42.101 [2024-11-27 14:15:12.953189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.101 [2024-11-27 14:15:12.956228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.101 [2024-11-27 14:15:12.956352] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:42.101 BaseBdev3 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 BaseBdev4_malloc 00:15:42.101 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.101 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:42.101 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.101 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.101 true 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 [2024-11-27 14:15:13.025549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:42.102 [2024-11-27 14:15:13.025673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.102 [2024-11-27 14:15:13.025719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:42.102 [2024-11-27 14:15:13.025732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.102 [2024-11-27 14:15:13.028311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.102 [2024-11-27 14:15:13.028361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:42.102 BaseBdev4 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.102 [2024-11-27 14:15:13.037591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.102 [2024-11-27 14:15:13.039607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.102 [2024-11-27 14:15:13.039748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.102 [2024-11-27 14:15:13.039820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:42.102 [2024-11-27 14:15:13.040090] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:42.102 [2024-11-27 14:15:13.040109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:42.102 [2024-11-27 14:15:13.040427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:42.102 [2024-11-27 14:15:13.040626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:42.102 [2024-11-27 14:15:13.040639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:42.102 [2024-11-27 14:15:13.040829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:42.102 14:15:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.102 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.361 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.361 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.361 "name": "raid_bdev1", 00:15:42.361 "uuid": "a583c69c-ab0b-43ec-949c-442b4a67f384", 00:15:42.361 "strip_size_kb": 64, 00:15:42.361 "state": "online", 00:15:42.361 "raid_level": "concat", 00:15:42.361 "superblock": true, 00:15:42.361 "num_base_bdevs": 4, 00:15:42.361 "num_base_bdevs_discovered": 4, 00:15:42.361 "num_base_bdevs_operational": 4, 00:15:42.361 "base_bdevs_list": [ 
00:15:42.361 { 00:15:42.361 "name": "BaseBdev1", 00:15:42.361 "uuid": "6281347a-78ba-5182-91ef-660369bb1539", 00:15:42.361 "is_configured": true, 00:15:42.361 "data_offset": 2048, 00:15:42.361 "data_size": 63488 00:15:42.361 }, 00:15:42.361 { 00:15:42.361 "name": "BaseBdev2", 00:15:42.361 "uuid": "0421a814-e693-5fde-af08-5cabca11b619", 00:15:42.361 "is_configured": true, 00:15:42.361 "data_offset": 2048, 00:15:42.361 "data_size": 63488 00:15:42.361 }, 00:15:42.361 { 00:15:42.361 "name": "BaseBdev3", 00:15:42.361 "uuid": "93b9e4dc-4509-5253-aa7b-7efb42813431", 00:15:42.361 "is_configured": true, 00:15:42.361 "data_offset": 2048, 00:15:42.361 "data_size": 63488 00:15:42.361 }, 00:15:42.361 { 00:15:42.361 "name": "BaseBdev4", 00:15:42.361 "uuid": "661097f7-326a-588e-a266-e81b0d7bd9e2", 00:15:42.361 "is_configured": true, 00:15:42.361 "data_offset": 2048, 00:15:42.361 "data_size": 63488 00:15:42.361 } 00:15:42.361 ] 00:15:42.361 }' 00:15:42.361 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.361 14:15:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:42.621 14:15:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:42.880 [2024-11-27 14:15:13.610106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.820 14:15:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.820 14:15:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.820 "name": "raid_bdev1", 00:15:43.820 "uuid": "a583c69c-ab0b-43ec-949c-442b4a67f384", 00:15:43.820 "strip_size_kb": 64, 00:15:43.820 "state": "online", 00:15:43.820 "raid_level": "concat", 00:15:43.820 "superblock": true, 00:15:43.820 "num_base_bdevs": 4, 00:15:43.820 "num_base_bdevs_discovered": 4, 00:15:43.820 "num_base_bdevs_operational": 4, 00:15:43.820 "base_bdevs_list": [ 00:15:43.820 { 00:15:43.820 "name": "BaseBdev1", 00:15:43.820 "uuid": "6281347a-78ba-5182-91ef-660369bb1539", 00:15:43.820 "is_configured": true, 00:15:43.820 "data_offset": 2048, 00:15:43.820 "data_size": 63488 00:15:43.820 }, 00:15:43.820 { 00:15:43.820 "name": "BaseBdev2", 00:15:43.820 "uuid": "0421a814-e693-5fde-af08-5cabca11b619", 00:15:43.820 "is_configured": true, 00:15:43.820 "data_offset": 2048, 00:15:43.820 "data_size": 63488 00:15:43.820 }, 00:15:43.820 { 00:15:43.820 "name": "BaseBdev3", 00:15:43.820 "uuid": "93b9e4dc-4509-5253-aa7b-7efb42813431", 00:15:43.820 "is_configured": true, 00:15:43.820 "data_offset": 2048, 00:15:43.820 "data_size": 63488 00:15:43.820 }, 00:15:43.820 { 00:15:43.820 "name": "BaseBdev4", 00:15:43.820 "uuid": "661097f7-326a-588e-a266-e81b0d7bd9e2", 00:15:43.820 "is_configured": true, 00:15:43.820 "data_offset": 2048, 00:15:43.820 "data_size": 63488 00:15:43.820 } 00:15:43.820 ] 00:15:43.820 }' 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.820 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.080 14:15:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.080 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.080 14:15:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.080 [2024-11-27 14:15:14.999595] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.080 [2024-11-27 14:15:14.999635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.080 [2024-11-27 14:15:15.002836] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.080 { 00:15:44.080 "results": [ 00:15:44.080 { 00:15:44.080 "job": "raid_bdev1", 00:15:44.080 "core_mask": "0x1", 00:15:44.080 "workload": "randrw", 00:15:44.080 "percentage": 50, 00:15:44.080 "status": "finished", 00:15:44.080 "queue_depth": 1, 00:15:44.080 "io_size": 131072, 00:15:44.080 "runtime": 1.390097, 00:15:44.080 "iops": 13337.198771021014, 00:15:44.080 "mibps": 1667.1498463776268, 00:15:44.080 "io_failed": 1, 00:15:44.080 "io_timeout": 0, 00:15:44.080 "avg_latency_us": 103.67303940352656, 00:15:44.080 "min_latency_us": 28.841921397379913, 00:15:44.080 "max_latency_us": 1674.172925764192 00:15:44.080 } 00:15:44.080 ], 00:15:44.080 "core_count": 1 00:15:44.080 } 00:15:44.080 [2024-11-27 14:15:15.002968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.080 [2024-11-27 14:15:15.003024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.080 [2024-11-27 14:15:15.003039] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73124 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73124 ']' 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73124 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.080 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73124 00:15:44.340 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.340 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.340 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73124' 00:15:44.340 killing process with pid 73124 00:15:44.340 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73124 00:15:44.340 [2024-11-27 14:15:15.053230] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.340 14:15:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73124 00:15:44.600 [2024-11-27 14:15:15.395842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZOc4oWzdwc 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:15:46.025 00:15:46.025 real 0m4.931s 00:15:46.025 user 0m5.831s 00:15:46.025 sys 0m0.589s 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:15:46.025 14:15:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.025 ************************************ 00:15:46.025 END TEST raid_read_error_test 00:15:46.025 ************************************ 00:15:46.025 14:15:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:46.025 14:15:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:46.025 14:15:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.025 14:15:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.025 ************************************ 00:15:46.025 START TEST raid_write_error_test 00:15:46.025 ************************************ 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3MHSZQGPDx 00:15:46.025 14:15:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73275 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73275 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73275 ']' 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.025 14:15:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.025 [2024-11-27 14:15:16.866857] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:46.025 [2024-11-27 14:15:16.867086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73275 ] 00:15:46.298 [2024-11-27 14:15:17.032350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.298 [2024-11-27 14:15:17.164895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.559 [2024-11-27 14:15:17.389230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.559 [2024-11-27 14:15:17.389315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.819 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.078 BaseBdev1_malloc 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.078 true 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.078 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.078 [2024-11-27 14:15:17.833821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:47.079 [2024-11-27 14:15:17.833882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.079 [2024-11-27 14:15:17.833903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:47.079 [2024-11-27 14:15:17.833915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.079 [2024-11-27 14:15:17.836157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.079 [2024-11-27 14:15:17.836200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:47.079 BaseBdev1 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 BaseBdev2_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:47.079 14:15:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 true 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 [2024-11-27 14:15:17.905332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:47.079 [2024-11-27 14:15:17.905452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.079 [2024-11-27 14:15:17.905477] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:47.079 [2024-11-27 14:15:17.905489] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.079 [2024-11-27 14:15:17.907803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.079 [2024-11-27 14:15:17.907849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:47.079 BaseBdev2 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:47.079 BaseBdev3_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 true 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.079 [2024-11-27 14:15:17.992760] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:47.079 [2024-11-27 14:15:17.992882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.079 [2024-11-27 14:15:17.992909] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:47.079 [2024-11-27 14:15:17.992922] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.079 [2024-11-27 14:15:17.995320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.079 [2024-11-27 14:15:17.995359] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:47.079 BaseBdev3 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.079 14:15:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.339 BaseBdev4_malloc 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.339 true 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.339 [2024-11-27 14:15:18.066397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:47.339 [2024-11-27 14:15:18.066461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.339 [2024-11-27 14:15:18.066499] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:47.339 [2024-11-27 14:15:18.066511] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.339 [2024-11-27 14:15:18.068897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.339 [2024-11-27 14:15:18.068945] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:47.339 BaseBdev4 
00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.339 [2024-11-27 14:15:18.078451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.339 [2024-11-27 14:15:18.080454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.339 [2024-11-27 14:15:18.080541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:47.339 [2024-11-27 14:15:18.080614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:47.339 [2024-11-27 14:15:18.080880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:47.339 [2024-11-27 14:15:18.080897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:47.339 [2024-11-27 14:15:18.081218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:47.339 [2024-11-27 14:15:18.081412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:47.339 [2024-11-27 14:15:18.081432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:47.339 [2024-11-27 14:15:18.081643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.339 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.340 "name": "raid_bdev1", 00:15:47.340 "uuid": "fb638975-15e2-4082-a98c-4d206fc2d4b7", 00:15:47.340 "strip_size_kb": 64, 00:15:47.340 "state": "online", 00:15:47.340 "raid_level": "concat", 00:15:47.340 "superblock": true, 00:15:47.340 "num_base_bdevs": 4, 00:15:47.340 "num_base_bdevs_discovered": 4, 00:15:47.340 
"num_base_bdevs_operational": 4, 00:15:47.340 "base_bdevs_list": [ 00:15:47.340 { 00:15:47.340 "name": "BaseBdev1", 00:15:47.340 "uuid": "7abba3f9-1b19-5ea9-acdb-838a942829b5", 00:15:47.340 "is_configured": true, 00:15:47.340 "data_offset": 2048, 00:15:47.340 "data_size": 63488 00:15:47.340 }, 00:15:47.340 { 00:15:47.340 "name": "BaseBdev2", 00:15:47.340 "uuid": "31df3543-a305-5d15-ab7a-f1a79d641c86", 00:15:47.340 "is_configured": true, 00:15:47.340 "data_offset": 2048, 00:15:47.340 "data_size": 63488 00:15:47.340 }, 00:15:47.340 { 00:15:47.340 "name": "BaseBdev3", 00:15:47.340 "uuid": "d9c778b9-c1e6-5b21-9b9c-557bd755c7cd", 00:15:47.340 "is_configured": true, 00:15:47.340 "data_offset": 2048, 00:15:47.340 "data_size": 63488 00:15:47.340 }, 00:15:47.340 { 00:15:47.340 "name": "BaseBdev4", 00:15:47.340 "uuid": "9385f521-882c-5187-8fa0-1e556acdc87b", 00:15:47.340 "is_configured": true, 00:15:47.340 "data_offset": 2048, 00:15:47.340 "data_size": 63488 00:15:47.340 } 00:15:47.340 ] 00:15:47.340 }' 00:15:47.340 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.340 14:15:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.909 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:47.909 14:15:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:47.909 [2024-11-27 14:15:18.678798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.850 14:15:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.850 "name": "raid_bdev1", 00:15:48.850 "uuid": "fb638975-15e2-4082-a98c-4d206fc2d4b7", 00:15:48.850 "strip_size_kb": 64, 00:15:48.850 "state": "online", 00:15:48.850 "raid_level": "concat", 00:15:48.850 "superblock": true, 00:15:48.850 "num_base_bdevs": 4, 00:15:48.850 "num_base_bdevs_discovered": 4, 00:15:48.850 "num_base_bdevs_operational": 4, 00:15:48.850 "base_bdevs_list": [ 00:15:48.850 { 00:15:48.850 "name": "BaseBdev1", 00:15:48.850 "uuid": "7abba3f9-1b19-5ea9-acdb-838a942829b5", 00:15:48.850 "is_configured": true, 00:15:48.850 "data_offset": 2048, 00:15:48.850 "data_size": 63488 00:15:48.850 }, 00:15:48.850 { 00:15:48.850 "name": "BaseBdev2", 00:15:48.850 "uuid": "31df3543-a305-5d15-ab7a-f1a79d641c86", 00:15:48.850 "is_configured": true, 00:15:48.850 "data_offset": 2048, 00:15:48.850 "data_size": 63488 00:15:48.850 }, 00:15:48.850 { 00:15:48.850 "name": "BaseBdev3", 00:15:48.850 "uuid": "d9c778b9-c1e6-5b21-9b9c-557bd755c7cd", 00:15:48.850 "is_configured": true, 00:15:48.850 "data_offset": 2048, 00:15:48.850 "data_size": 63488 00:15:48.850 }, 00:15:48.850 { 00:15:48.850 "name": "BaseBdev4", 00:15:48.850 "uuid": "9385f521-882c-5187-8fa0-1e556acdc87b", 00:15:48.850 "is_configured": true, 00:15:48.850 "data_offset": 2048, 00:15:48.850 "data_size": 63488 00:15:48.850 } 00:15:48.850 ] 00:15:48.850 }' 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.850 14:15:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.420 14:15:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.420 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.420 14:15:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.420 [2024-11-27 14:15:20.076968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.420 [2024-11-27 14:15:20.077010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.420 [2024-11-27 14:15:20.080400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.420 [2024-11-27 14:15:20.080483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.420 [2024-11-27 14:15:20.080543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.420 [2024-11-27 14:15:20.080563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:49.420 { 00:15:49.420 "results": [ 00:15:49.420 { 00:15:49.420 "job": "raid_bdev1", 00:15:49.420 "core_mask": "0x1", 00:15:49.420 "workload": "randrw", 00:15:49.420 "percentage": 50, 00:15:49.420 "status": "finished", 00:15:49.420 "queue_depth": 1, 00:15:49.420 "io_size": 131072, 00:15:49.420 "runtime": 1.398564, 00:15:49.420 "iops": 12767.38139977863, 00:15:49.420 "mibps": 1595.9226749723287, 00:15:49.420 "io_failed": 1, 00:15:49.420 "io_timeout": 0, 00:15:49.420 "avg_latency_us": 108.26360863463329, 00:15:49.420 "min_latency_us": 28.841921397379913, 00:15:49.420 "max_latency_us": 1774.3371179039302 00:15:49.420 } 00:15:49.420 ], 00:15:49.421 "core_count": 1 00:15:49.421 } 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73275 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73275 ']' 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73275 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73275 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:49.421 killing process with pid 73275 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73275' 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73275 00:15:49.421 [2024-11-27 14:15:20.115356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.421 14:15:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73275 00:15:49.681 [2024-11-27 14:15:20.495913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3MHSZQGPDx 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:51.077 ************************************ 00:15:51.077 END TEST raid_write_error_test 00:15:51.077 ************************************ 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:51.077 14:15:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:15:51.077 00:15:51.077 real 0m5.155s 00:15:51.077 user 0m6.139s 00:15:51.077 sys 0m0.608s 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.077 14:15:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.077 14:15:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:51.077 14:15:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:51.077 14:15:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:51.077 14:15:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.077 14:15:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.077 ************************************ 00:15:51.077 START TEST raid_state_function_test 00:15:51.077 ************************************ 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:51.077 14:15:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73419 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73419' 00:15:51.077 Process raid pid: 73419 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73419 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73419 ']' 00:15:51.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.077 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:51.078 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.078 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:51.078 14:15:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.338 [2024-11-27 14:15:22.077349] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:15:51.338 [2024-11-27 14:15:22.077483] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.338 [2024-11-27 14:15:22.270792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.598 [2024-11-27 14:15:22.410410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.858 [2024-11-27 14:15:22.651750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.858 [2024-11-27 14:15:22.651805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.118 [2024-11-27 14:15:23.009212] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.118 [2024-11-27 14:15:23.009285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.118 [2024-11-27 14:15:23.009305] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.118 [2024-11-27 14:15:23.009317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.118 [2024-11-27 14:15:23.009325] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:52.118 [2024-11-27 14:15:23.009336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.118 [2024-11-27 14:15:23.009343] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.118 [2024-11-27 14:15:23.009353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.118 "name": "Existed_Raid", 00:15:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.118 "strip_size_kb": 0, 00:15:52.118 "state": "configuring", 00:15:52.118 "raid_level": "raid1", 00:15:52.118 "superblock": false, 00:15:52.118 "num_base_bdevs": 4, 00:15:52.118 "num_base_bdevs_discovered": 0, 00:15:52.118 "num_base_bdevs_operational": 4, 00:15:52.118 "base_bdevs_list": [ 00:15:52.118 { 00:15:52.118 "name": "BaseBdev1", 00:15:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.118 "is_configured": false, 00:15:52.118 "data_offset": 0, 00:15:52.118 "data_size": 0 00:15:52.118 }, 00:15:52.118 { 00:15:52.118 "name": "BaseBdev2", 00:15:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.118 "is_configured": false, 00:15:52.118 "data_offset": 0, 00:15:52.118 "data_size": 0 00:15:52.118 }, 00:15:52.118 { 00:15:52.118 "name": "BaseBdev3", 00:15:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.118 "is_configured": false, 00:15:52.118 "data_offset": 0, 00:15:52.118 "data_size": 0 00:15:52.118 }, 00:15:52.118 { 00:15:52.118 "name": "BaseBdev4", 00:15:52.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.118 "is_configured": false, 00:15:52.118 "data_offset": 0, 00:15:52.118 "data_size": 0 00:15:52.118 } 00:15:52.118 ] 00:15:52.118 }' 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.118 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 [2024-11-27 14:15:23.508302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.688 [2024-11-27 14:15:23.508429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 [2024-11-27 14:15:23.520290] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:52.688 [2024-11-27 14:15:23.520380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:52.688 [2024-11-27 14:15:23.520433] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.688 [2024-11-27 14:15:23.520485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.688 [2024-11-27 14:15:23.520517] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.688 [2024-11-27 14:15:23.520551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.688 [2024-11-27 14:15:23.520582] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.688 [2024-11-27 14:15:23.520612] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 [2024-11-27 14:15:23.574395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.688 BaseBdev1 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.688 [ 00:15:52.688 { 00:15:52.688 "name": "BaseBdev1", 00:15:52.688 "aliases": [ 00:15:52.688 "8d3cefb3-80ce-4ca7-91a2-1328517e4938" 00:15:52.688 ], 00:15:52.688 "product_name": "Malloc disk", 00:15:52.688 "block_size": 512, 00:15:52.688 "num_blocks": 65536, 00:15:52.688 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:52.688 "assigned_rate_limits": { 00:15:52.688 "rw_ios_per_sec": 0, 00:15:52.688 "rw_mbytes_per_sec": 0, 00:15:52.688 "r_mbytes_per_sec": 0, 00:15:52.688 "w_mbytes_per_sec": 0 00:15:52.688 }, 00:15:52.688 "claimed": true, 00:15:52.688 "claim_type": "exclusive_write", 00:15:52.688 "zoned": false, 00:15:52.688 "supported_io_types": { 00:15:52.688 "read": true, 00:15:52.688 "write": true, 00:15:52.688 "unmap": true, 00:15:52.688 "flush": true, 00:15:52.688 "reset": true, 00:15:52.688 "nvme_admin": false, 00:15:52.688 "nvme_io": false, 00:15:52.688 "nvme_io_md": false, 00:15:52.688 "write_zeroes": true, 00:15:52.688 "zcopy": true, 00:15:52.688 "get_zone_info": false, 00:15:52.688 "zone_management": false, 00:15:52.688 "zone_append": false, 00:15:52.688 "compare": false, 00:15:52.688 "compare_and_write": false, 00:15:52.688 "abort": true, 00:15:52.688 "seek_hole": false, 00:15:52.688 "seek_data": false, 00:15:52.688 "copy": true, 00:15:52.688 "nvme_iov_md": false 00:15:52.688 }, 00:15:52.688 "memory_domains": [ 00:15:52.688 { 00:15:52.688 "dma_device_id": "system", 00:15:52.688 "dma_device_type": 1 00:15:52.688 }, 00:15:52.688 { 00:15:52.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.688 "dma_device_type": 2 00:15:52.688 } 00:15:52.688 ], 00:15:52.688 "driver_specific": {} 00:15:52.688 } 00:15:52.688 ] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.688 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.689 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.949 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.949 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.949 "name": "Existed_Raid", 
00:15:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.949 "strip_size_kb": 0, 00:15:52.949 "state": "configuring", 00:15:52.949 "raid_level": "raid1", 00:15:52.949 "superblock": false, 00:15:52.949 "num_base_bdevs": 4, 00:15:52.950 "num_base_bdevs_discovered": 1, 00:15:52.950 "num_base_bdevs_operational": 4, 00:15:52.950 "base_bdevs_list": [ 00:15:52.950 { 00:15:52.950 "name": "BaseBdev1", 00:15:52.950 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:52.950 "is_configured": true, 00:15:52.950 "data_offset": 0, 00:15:52.950 "data_size": 65536 00:15:52.950 }, 00:15:52.950 { 00:15:52.950 "name": "BaseBdev2", 00:15:52.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.950 "is_configured": false, 00:15:52.950 "data_offset": 0, 00:15:52.950 "data_size": 0 00:15:52.950 }, 00:15:52.950 { 00:15:52.950 "name": "BaseBdev3", 00:15:52.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.950 "is_configured": false, 00:15:52.950 "data_offset": 0, 00:15:52.950 "data_size": 0 00:15:52.950 }, 00:15:52.950 { 00:15:52.950 "name": "BaseBdev4", 00:15:52.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.950 "is_configured": false, 00:15:52.950 "data_offset": 0, 00:15:52.950 "data_size": 0 00:15:52.950 } 00:15:52.950 ] 00:15:52.950 }' 00:15:52.950 14:15:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.950 14:15:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.210 [2024-11-27 14:15:24.089608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.210 [2024-11-27 14:15:24.089740] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.210 [2024-11-27 14:15:24.101697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.210 [2024-11-27 14:15:24.103940] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.210 [2024-11-27 14:15:24.104058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.210 [2024-11-27 14:15:24.104100] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:53.210 [2024-11-27 14:15:24.104156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:53.210 [2024-11-27 14:15:24.104209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:53.210 [2024-11-27 14:15:24.104242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.210 
14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.210 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.210 "name": "Existed_Raid", 00:15:53.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.210 "strip_size_kb": 0, 00:15:53.210 "state": "configuring", 00:15:53.210 "raid_level": "raid1", 00:15:53.210 "superblock": false, 00:15:53.210 "num_base_bdevs": 4, 00:15:53.210 "num_base_bdevs_discovered": 1, 
00:15:53.211 "num_base_bdevs_operational": 4, 00:15:53.211 "base_bdevs_list": [ 00:15:53.211 { 00:15:53.211 "name": "BaseBdev1", 00:15:53.211 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:53.211 "is_configured": true, 00:15:53.211 "data_offset": 0, 00:15:53.211 "data_size": 65536 00:15:53.211 }, 00:15:53.211 { 00:15:53.211 "name": "BaseBdev2", 00:15:53.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.211 "is_configured": false, 00:15:53.211 "data_offset": 0, 00:15:53.211 "data_size": 0 00:15:53.211 }, 00:15:53.211 { 00:15:53.211 "name": "BaseBdev3", 00:15:53.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.211 "is_configured": false, 00:15:53.211 "data_offset": 0, 00:15:53.211 "data_size": 0 00:15:53.211 }, 00:15:53.211 { 00:15:53.211 "name": "BaseBdev4", 00:15:53.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.211 "is_configured": false, 00:15:53.211 "data_offset": 0, 00:15:53.211 "data_size": 0 00:15:53.211 } 00:15:53.211 ] 00:15:53.211 }' 00:15:53.211 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.470 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 [2024-11-27 14:15:24.597815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.730 BaseBdev2 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 [ 00:15:53.730 { 00:15:53.730 "name": "BaseBdev2", 00:15:53.730 "aliases": [ 00:15:53.730 "871d04b2-7423-4b42-98d3-e1e7ed565b7c" 00:15:53.730 ], 00:15:53.730 "product_name": "Malloc disk", 00:15:53.730 "block_size": 512, 00:15:53.730 "num_blocks": 65536, 00:15:53.730 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:53.730 "assigned_rate_limits": { 00:15:53.730 "rw_ios_per_sec": 0, 00:15:53.730 "rw_mbytes_per_sec": 0, 00:15:53.730 "r_mbytes_per_sec": 0, 00:15:53.730 "w_mbytes_per_sec": 0 00:15:53.730 }, 00:15:53.730 "claimed": true, 00:15:53.730 "claim_type": "exclusive_write", 00:15:53.730 "zoned": false, 00:15:53.730 "supported_io_types": { 00:15:53.730 "read": true, 
00:15:53.730 "write": true, 00:15:53.730 "unmap": true, 00:15:53.730 "flush": true, 00:15:53.730 "reset": true, 00:15:53.730 "nvme_admin": false, 00:15:53.730 "nvme_io": false, 00:15:53.730 "nvme_io_md": false, 00:15:53.730 "write_zeroes": true, 00:15:53.730 "zcopy": true, 00:15:53.730 "get_zone_info": false, 00:15:53.730 "zone_management": false, 00:15:53.730 "zone_append": false, 00:15:53.730 "compare": false, 00:15:53.730 "compare_and_write": false, 00:15:53.730 "abort": true, 00:15:53.730 "seek_hole": false, 00:15:53.730 "seek_data": false, 00:15:53.730 "copy": true, 00:15:53.730 "nvme_iov_md": false 00:15:53.730 }, 00:15:53.730 "memory_domains": [ 00:15:53.730 { 00:15:53.730 "dma_device_id": "system", 00:15:53.730 "dma_device_type": 1 00:15:53.730 }, 00:15:53.730 { 00:15:53.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.730 "dma_device_type": 2 00:15:53.730 } 00:15:53.730 ], 00:15:53.730 "driver_specific": {} 00:15:53.730 } 00:15:53.730 ] 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.730 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.989 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.989 "name": "Existed_Raid", 00:15:53.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.989 "strip_size_kb": 0, 00:15:53.989 "state": "configuring", 00:15:53.989 "raid_level": "raid1", 00:15:53.990 "superblock": false, 00:15:53.990 "num_base_bdevs": 4, 00:15:53.990 "num_base_bdevs_discovered": 2, 00:15:53.990 "num_base_bdevs_operational": 4, 00:15:53.990 "base_bdevs_list": [ 00:15:53.990 { 00:15:53.990 "name": "BaseBdev1", 00:15:53.990 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:53.990 "is_configured": true, 00:15:53.990 "data_offset": 0, 00:15:53.990 "data_size": 65536 00:15:53.990 }, 00:15:53.990 { 00:15:53.990 "name": "BaseBdev2", 00:15:53.990 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:53.990 "is_configured": true, 
00:15:53.990 "data_offset": 0, 00:15:53.990 "data_size": 65536 00:15:53.990 }, 00:15:53.990 { 00:15:53.990 "name": "BaseBdev3", 00:15:53.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.990 "is_configured": false, 00:15:53.990 "data_offset": 0, 00:15:53.990 "data_size": 0 00:15:53.990 }, 00:15:53.990 { 00:15:53.990 "name": "BaseBdev4", 00:15:53.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.990 "is_configured": false, 00:15:53.990 "data_offset": 0, 00:15:53.990 "data_size": 0 00:15:53.990 } 00:15:53.990 ] 00:15:53.990 }' 00:15:53.990 14:15:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.990 14:15:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 [2024-11-27 14:15:25.124195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.250 BaseBdev3 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 [ 00:15:54.250 { 00:15:54.250 "name": "BaseBdev3", 00:15:54.250 "aliases": [ 00:15:54.250 "e65dd554-a024-49e4-ad6c-cad6611bebb1" 00:15:54.250 ], 00:15:54.250 "product_name": "Malloc disk", 00:15:54.250 "block_size": 512, 00:15:54.250 "num_blocks": 65536, 00:15:54.250 "uuid": "e65dd554-a024-49e4-ad6c-cad6611bebb1", 00:15:54.250 "assigned_rate_limits": { 00:15:54.250 "rw_ios_per_sec": 0, 00:15:54.250 "rw_mbytes_per_sec": 0, 00:15:54.250 "r_mbytes_per_sec": 0, 00:15:54.250 "w_mbytes_per_sec": 0 00:15:54.250 }, 00:15:54.250 "claimed": true, 00:15:54.250 "claim_type": "exclusive_write", 00:15:54.250 "zoned": false, 00:15:54.250 "supported_io_types": { 00:15:54.250 "read": true, 00:15:54.250 "write": true, 00:15:54.250 "unmap": true, 00:15:54.250 "flush": true, 00:15:54.250 "reset": true, 00:15:54.250 "nvme_admin": false, 00:15:54.250 "nvme_io": false, 00:15:54.250 "nvme_io_md": false, 00:15:54.250 "write_zeroes": true, 00:15:54.250 "zcopy": true, 00:15:54.250 "get_zone_info": false, 00:15:54.250 "zone_management": false, 00:15:54.250 "zone_append": false, 00:15:54.250 "compare": false, 00:15:54.250 "compare_and_write": false, 
00:15:54.250 "abort": true, 00:15:54.250 "seek_hole": false, 00:15:54.250 "seek_data": false, 00:15:54.250 "copy": true, 00:15:54.250 "nvme_iov_md": false 00:15:54.250 }, 00:15:54.250 "memory_domains": [ 00:15:54.250 { 00:15:54.250 "dma_device_id": "system", 00:15:54.250 "dma_device_type": 1 00:15:54.250 }, 00:15:54.250 { 00:15:54.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.250 "dma_device_type": 2 00:15:54.250 } 00:15:54.250 ], 00:15:54.250 "driver_specific": {} 00:15:54.250 } 00:15:54.250 ] 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.250 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.510 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.510 "name": "Existed_Raid", 00:15:54.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.510 "strip_size_kb": 0, 00:15:54.510 "state": "configuring", 00:15:54.510 "raid_level": "raid1", 00:15:54.510 "superblock": false, 00:15:54.510 "num_base_bdevs": 4, 00:15:54.510 "num_base_bdevs_discovered": 3, 00:15:54.510 "num_base_bdevs_operational": 4, 00:15:54.510 "base_bdevs_list": [ 00:15:54.510 { 00:15:54.510 "name": "BaseBdev1", 00:15:54.510 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:54.510 "is_configured": true, 00:15:54.510 "data_offset": 0, 00:15:54.510 "data_size": 65536 00:15:54.510 }, 00:15:54.510 { 00:15:54.510 "name": "BaseBdev2", 00:15:54.510 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:54.510 "is_configured": true, 00:15:54.510 "data_offset": 0, 00:15:54.510 "data_size": 65536 00:15:54.510 }, 00:15:54.510 { 00:15:54.510 "name": "BaseBdev3", 00:15:54.510 "uuid": "e65dd554-a024-49e4-ad6c-cad6611bebb1", 00:15:54.510 "is_configured": true, 00:15:54.510 "data_offset": 0, 00:15:54.510 "data_size": 65536 00:15:54.510 }, 00:15:54.510 { 00:15:54.510 "name": "BaseBdev4", 00:15:54.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.510 "is_configured": false, 
00:15:54.510 "data_offset": 0, 00:15:54.510 "data_size": 0 00:15:54.510 } 00:15:54.510 ] 00:15:54.510 }' 00:15:54.510 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.510 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 [2024-11-27 14:15:25.670852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.770 [2024-11-27 14:15:25.670920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:54.770 [2024-11-27 14:15:25.670928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:54.770 [2024-11-27 14:15:25.671244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:54.770 [2024-11-27 14:15:25.671436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:54.770 [2024-11-27 14:15:25.671452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:54.770 [2024-11-27 14:15:25.671767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.770 BaseBdev4 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.770 [ 00:15:54.770 { 00:15:54.770 "name": "BaseBdev4", 00:15:54.770 "aliases": [ 00:15:54.770 "b7317c66-dac7-4bc6-8a95-72b2e958c4f2" 00:15:54.770 ], 00:15:54.770 "product_name": "Malloc disk", 00:15:54.770 "block_size": 512, 00:15:54.770 "num_blocks": 65536, 00:15:54.770 "uuid": "b7317c66-dac7-4bc6-8a95-72b2e958c4f2", 00:15:54.770 "assigned_rate_limits": { 00:15:54.770 "rw_ios_per_sec": 0, 00:15:54.770 "rw_mbytes_per_sec": 0, 00:15:54.770 "r_mbytes_per_sec": 0, 00:15:54.770 "w_mbytes_per_sec": 0 00:15:54.770 }, 00:15:54.770 "claimed": true, 00:15:54.770 "claim_type": "exclusive_write", 00:15:54.770 "zoned": false, 00:15:54.770 "supported_io_types": { 00:15:54.770 "read": true, 00:15:54.770 "write": true, 00:15:54.770 "unmap": true, 00:15:54.770 "flush": true, 00:15:54.770 "reset": true, 00:15:54.770 
"nvme_admin": false, 00:15:54.770 "nvme_io": false, 00:15:54.770 "nvme_io_md": false, 00:15:54.770 "write_zeroes": true, 00:15:54.770 "zcopy": true, 00:15:54.770 "get_zone_info": false, 00:15:54.770 "zone_management": false, 00:15:54.770 "zone_append": false, 00:15:54.770 "compare": false, 00:15:54.770 "compare_and_write": false, 00:15:54.770 "abort": true, 00:15:54.770 "seek_hole": false, 00:15:54.770 "seek_data": false, 00:15:54.770 "copy": true, 00:15:54.770 "nvme_iov_md": false 00:15:54.770 }, 00:15:54.770 "memory_domains": [ 00:15:54.770 { 00:15:54.770 "dma_device_id": "system", 00:15:54.770 "dma_device_type": 1 00:15:54.770 }, 00:15:54.770 { 00:15:54.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.770 "dma_device_type": 2 00:15:54.770 } 00:15:54.770 ], 00:15:54.770 "driver_specific": {} 00:15:54.770 } 00:15:54.770 ] 00:15:54.770 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.771 14:15:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.771 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.030 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.030 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.030 "name": "Existed_Raid", 00:15:55.030 "uuid": "c177f426-2d90-470a-8b07-70d1e1b28a56", 00:15:55.030 "strip_size_kb": 0, 00:15:55.030 "state": "online", 00:15:55.030 "raid_level": "raid1", 00:15:55.030 "superblock": false, 00:15:55.030 "num_base_bdevs": 4, 00:15:55.030 "num_base_bdevs_discovered": 4, 00:15:55.030 "num_base_bdevs_operational": 4, 00:15:55.030 "base_bdevs_list": [ 00:15:55.030 { 00:15:55.030 "name": "BaseBdev1", 00:15:55.030 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:55.030 "is_configured": true, 00:15:55.030 "data_offset": 0, 00:15:55.030 "data_size": 65536 00:15:55.030 }, 00:15:55.030 { 00:15:55.030 "name": "BaseBdev2", 00:15:55.030 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:55.030 "is_configured": true, 00:15:55.030 "data_offset": 0, 00:15:55.030 "data_size": 65536 00:15:55.030 }, 00:15:55.030 { 00:15:55.030 "name": "BaseBdev3", 00:15:55.030 "uuid": 
"e65dd554-a024-49e4-ad6c-cad6611bebb1", 00:15:55.030 "is_configured": true, 00:15:55.030 "data_offset": 0, 00:15:55.030 "data_size": 65536 00:15:55.030 }, 00:15:55.030 { 00:15:55.030 "name": "BaseBdev4", 00:15:55.030 "uuid": "b7317c66-dac7-4bc6-8a95-72b2e958c4f2", 00:15:55.030 "is_configured": true, 00:15:55.030 "data_offset": 0, 00:15:55.030 "data_size": 65536 00:15:55.030 } 00:15:55.030 ] 00:15:55.030 }' 00:15:55.030 14:15:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.030 14:15:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.289 [2024-11-27 14:15:26.178408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.289 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.289 14:15:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.289 "name": "Existed_Raid", 00:15:55.289 "aliases": [ 00:15:55.289 "c177f426-2d90-470a-8b07-70d1e1b28a56" 00:15:55.289 ], 00:15:55.289 "product_name": "Raid Volume", 00:15:55.289 "block_size": 512, 00:15:55.289 "num_blocks": 65536, 00:15:55.289 "uuid": "c177f426-2d90-470a-8b07-70d1e1b28a56", 00:15:55.289 "assigned_rate_limits": { 00:15:55.289 "rw_ios_per_sec": 0, 00:15:55.289 "rw_mbytes_per_sec": 0, 00:15:55.289 "r_mbytes_per_sec": 0, 00:15:55.289 "w_mbytes_per_sec": 0 00:15:55.289 }, 00:15:55.289 "claimed": false, 00:15:55.289 "zoned": false, 00:15:55.289 "supported_io_types": { 00:15:55.289 "read": true, 00:15:55.289 "write": true, 00:15:55.289 "unmap": false, 00:15:55.289 "flush": false, 00:15:55.289 "reset": true, 00:15:55.289 "nvme_admin": false, 00:15:55.289 "nvme_io": false, 00:15:55.290 "nvme_io_md": false, 00:15:55.290 "write_zeroes": true, 00:15:55.290 "zcopy": false, 00:15:55.290 "get_zone_info": false, 00:15:55.290 "zone_management": false, 00:15:55.290 "zone_append": false, 00:15:55.290 "compare": false, 00:15:55.290 "compare_and_write": false, 00:15:55.290 "abort": false, 00:15:55.290 "seek_hole": false, 00:15:55.290 "seek_data": false, 00:15:55.290 "copy": false, 00:15:55.290 "nvme_iov_md": false 00:15:55.290 }, 00:15:55.290 "memory_domains": [ 00:15:55.290 { 00:15:55.290 "dma_device_id": "system", 00:15:55.290 "dma_device_type": 1 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.290 "dma_device_type": 2 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "system", 00:15:55.290 "dma_device_type": 1 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.290 "dma_device_type": 2 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "system", 00:15:55.290 "dma_device_type": 1 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:55.290 "dma_device_type": 2 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "system", 00:15:55.290 "dma_device_type": 1 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.290 "dma_device_type": 2 00:15:55.290 } 00:15:55.290 ], 00:15:55.290 "driver_specific": { 00:15:55.290 "raid": { 00:15:55.290 "uuid": "c177f426-2d90-470a-8b07-70d1e1b28a56", 00:15:55.290 "strip_size_kb": 0, 00:15:55.290 "state": "online", 00:15:55.290 "raid_level": "raid1", 00:15:55.290 "superblock": false, 00:15:55.290 "num_base_bdevs": 4, 00:15:55.290 "num_base_bdevs_discovered": 4, 00:15:55.290 "num_base_bdevs_operational": 4, 00:15:55.290 "base_bdevs_list": [ 00:15:55.290 { 00:15:55.290 "name": "BaseBdev1", 00:15:55.290 "uuid": "8d3cefb3-80ce-4ca7-91a2-1328517e4938", 00:15:55.290 "is_configured": true, 00:15:55.290 "data_offset": 0, 00:15:55.290 "data_size": 65536 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "name": "BaseBdev2", 00:15:55.290 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:55.290 "is_configured": true, 00:15:55.290 "data_offset": 0, 00:15:55.290 "data_size": 65536 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "name": "BaseBdev3", 00:15:55.290 "uuid": "e65dd554-a024-49e4-ad6c-cad6611bebb1", 00:15:55.290 "is_configured": true, 00:15:55.290 "data_offset": 0, 00:15:55.290 "data_size": 65536 00:15:55.290 }, 00:15:55.290 { 00:15:55.290 "name": "BaseBdev4", 00:15:55.290 "uuid": "b7317c66-dac7-4bc6-8a95-72b2e958c4f2", 00:15:55.290 "is_configured": true, 00:15:55.290 "data_offset": 0, 00:15:55.290 "data_size": 65536 00:15:55.290 } 00:15:55.290 ] 00:15:55.290 } 00:15:55.290 } 00:15:55.290 }' 00:15:55.290 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:55.550 BaseBdev2 00:15:55.550 BaseBdev3 
00:15:55.550 BaseBdev4' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.550 14:15:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:55.550 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.551 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.551 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.810 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.810 14:15:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.810 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:55.810 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.811 [2024-11-27 14:15:26.517591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.811 
14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.811 "name": "Existed_Raid", 00:15:55.811 "uuid": "c177f426-2d90-470a-8b07-70d1e1b28a56", 00:15:55.811 "strip_size_kb": 0, 00:15:55.811 "state": "online", 00:15:55.811 "raid_level": "raid1", 00:15:55.811 "superblock": false, 00:15:55.811 "num_base_bdevs": 4, 00:15:55.811 "num_base_bdevs_discovered": 3, 00:15:55.811 "num_base_bdevs_operational": 3, 00:15:55.811 "base_bdevs_list": [ 00:15:55.811 { 00:15:55.811 "name": null, 00:15:55.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.811 "is_configured": false, 00:15:55.811 "data_offset": 0, 00:15:55.811 "data_size": 65536 00:15:55.811 }, 00:15:55.811 { 00:15:55.811 "name": "BaseBdev2", 00:15:55.811 "uuid": "871d04b2-7423-4b42-98d3-e1e7ed565b7c", 00:15:55.811 "is_configured": true, 00:15:55.811 "data_offset": 0, 00:15:55.811 "data_size": 65536 00:15:55.811 }, 00:15:55.811 { 00:15:55.811 "name": "BaseBdev3", 00:15:55.811 "uuid": "e65dd554-a024-49e4-ad6c-cad6611bebb1", 00:15:55.811 "is_configured": true, 00:15:55.811 "data_offset": 0, 
00:15:55.811 "data_size": 65536 00:15:55.811 }, 00:15:55.811 { 00:15:55.811 "name": "BaseBdev4", 00:15:55.811 "uuid": "b7317c66-dac7-4bc6-8a95-72b2e958c4f2", 00:15:55.811 "is_configured": true, 00:15:55.811 "data_offset": 0, 00:15:55.811 "data_size": 65536 00:15:55.811 } 00:15:55.811 ] 00:15:55.811 }' 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.811 14:15:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.385 [2024-11-27 14:15:27.164806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.385 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.385 [2024-11-27 14:15:27.323882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.646 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.647 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:56.647 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.647 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.647 [2024-11-27 14:15:27.502117] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:56.647 [2024-11-27 14:15:27.502308] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.907 [2024-11-27 14:15:27.619134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.907 [2024-11-27 14:15:27.619281] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.907 [2024-11-27 14:15:27.619327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:56.907 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.907 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.907 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 BaseBdev2 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 [ 00:15:56.908 { 00:15:56.908 "name": "BaseBdev2", 00:15:56.908 "aliases": [ 00:15:56.908 "f3e8cede-21ba-4da7-a6ac-0f35938996c8" 00:15:56.908 ], 00:15:56.908 "product_name": "Malloc disk", 00:15:56.908 "block_size": 512, 00:15:56.908 "num_blocks": 65536, 00:15:56.908 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:56.908 "assigned_rate_limits": { 00:15:56.908 "rw_ios_per_sec": 0, 00:15:56.908 "rw_mbytes_per_sec": 0, 00:15:56.908 "r_mbytes_per_sec": 0, 00:15:56.908 "w_mbytes_per_sec": 0 00:15:56.908 }, 00:15:56.908 "claimed": false, 00:15:56.908 "zoned": false, 00:15:56.908 "supported_io_types": { 00:15:56.908 "read": true, 00:15:56.908 "write": true, 00:15:56.908 "unmap": true, 00:15:56.908 "flush": true, 00:15:56.908 "reset": true, 00:15:56.908 "nvme_admin": false, 00:15:56.908 "nvme_io": false, 00:15:56.908 "nvme_io_md": false, 00:15:56.908 "write_zeroes": true, 00:15:56.908 "zcopy": true, 00:15:56.908 "get_zone_info": false, 00:15:56.908 "zone_management": false, 00:15:56.908 "zone_append": false, 
00:15:56.908 "compare": false, 00:15:56.908 "compare_and_write": false, 00:15:56.908 "abort": true, 00:15:56.908 "seek_hole": false, 00:15:56.908 "seek_data": false, 00:15:56.908 "copy": true, 00:15:56.908 "nvme_iov_md": false 00:15:56.908 }, 00:15:56.908 "memory_domains": [ 00:15:56.908 { 00:15:56.908 "dma_device_id": "system", 00:15:56.908 "dma_device_type": 1 00:15:56.908 }, 00:15:56.908 { 00:15:56.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.908 "dma_device_type": 2 00:15:56.908 } 00:15:56.908 ], 00:15:56.908 "driver_specific": {} 00:15:56.908 } 00:15:56.908 ] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 BaseBdev3 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.908 [ 00:15:56.908 { 00:15:56.908 "name": "BaseBdev3", 00:15:56.908 "aliases": [ 00:15:56.908 "c4b53d80-fd8e-417d-8467-3aa090b36b8b" 00:15:56.908 ], 00:15:56.908 "product_name": "Malloc disk", 00:15:56.908 "block_size": 512, 00:15:56.908 "num_blocks": 65536, 00:15:56.908 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:56.908 "assigned_rate_limits": { 00:15:56.908 "rw_ios_per_sec": 0, 00:15:56.908 "rw_mbytes_per_sec": 0, 00:15:56.908 "r_mbytes_per_sec": 0, 00:15:56.908 "w_mbytes_per_sec": 0 00:15:56.908 }, 00:15:56.908 "claimed": false, 00:15:56.908 "zoned": false, 00:15:56.908 "supported_io_types": { 00:15:56.908 "read": true, 00:15:56.908 "write": true, 00:15:56.908 "unmap": true, 00:15:56.908 "flush": true, 00:15:56.908 "reset": true, 00:15:56.908 "nvme_admin": false, 00:15:56.908 "nvme_io": false, 00:15:56.908 "nvme_io_md": false, 00:15:56.908 "write_zeroes": true, 00:15:56.908 "zcopy": true, 00:15:56.908 "get_zone_info": false, 00:15:56.908 "zone_management": false, 00:15:56.908 "zone_append": false, 
00:15:56.908 "compare": false, 00:15:56.908 "compare_and_write": false, 00:15:56.908 "abort": true, 00:15:56.908 "seek_hole": false, 00:15:56.908 "seek_data": false, 00:15:56.908 "copy": true, 00:15:56.908 "nvme_iov_md": false 00:15:56.908 }, 00:15:56.908 "memory_domains": [ 00:15:56.908 { 00:15:56.908 "dma_device_id": "system", 00:15:56.908 "dma_device_type": 1 00:15:56.908 }, 00:15:56.908 { 00:15:56.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.908 "dma_device_type": 2 00:15:56.908 } 00:15:56.908 ], 00:15:56.908 "driver_specific": {} 00:15:56.908 } 00:15:56.908 ] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.908 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 BaseBdev4 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 [ 00:15:57.169 { 00:15:57.169 "name": "BaseBdev4", 00:15:57.169 "aliases": [ 00:15:57.169 "3c117bbd-3bc3-4ac2-ad0b-7c166398a712" 00:15:57.169 ], 00:15:57.169 "product_name": "Malloc disk", 00:15:57.169 "block_size": 512, 00:15:57.169 "num_blocks": 65536, 00:15:57.169 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:57.169 "assigned_rate_limits": { 00:15:57.169 "rw_ios_per_sec": 0, 00:15:57.169 "rw_mbytes_per_sec": 0, 00:15:57.169 "r_mbytes_per_sec": 0, 00:15:57.169 "w_mbytes_per_sec": 0 00:15:57.169 }, 00:15:57.169 "claimed": false, 00:15:57.169 "zoned": false, 00:15:57.169 "supported_io_types": { 00:15:57.169 "read": true, 00:15:57.169 "write": true, 00:15:57.169 "unmap": true, 00:15:57.169 "flush": true, 00:15:57.169 "reset": true, 00:15:57.169 "nvme_admin": false, 00:15:57.169 "nvme_io": false, 00:15:57.169 "nvme_io_md": false, 00:15:57.169 "write_zeroes": true, 00:15:57.169 "zcopy": true, 00:15:57.169 "get_zone_info": false, 00:15:57.169 "zone_management": false, 00:15:57.169 "zone_append": false, 
00:15:57.169 "compare": false, 00:15:57.169 "compare_and_write": false, 00:15:57.169 "abort": true, 00:15:57.169 "seek_hole": false, 00:15:57.169 "seek_data": false, 00:15:57.169 "copy": true, 00:15:57.169 "nvme_iov_md": false 00:15:57.169 }, 00:15:57.169 "memory_domains": [ 00:15:57.169 { 00:15:57.169 "dma_device_id": "system", 00:15:57.169 "dma_device_type": 1 00:15:57.169 }, 00:15:57.169 { 00:15:57.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.169 "dma_device_type": 2 00:15:57.169 } 00:15:57.169 ], 00:15:57.169 "driver_specific": {} 00:15:57.169 } 00:15:57.169 ] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 [2024-11-27 14:15:27.944524] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.169 [2024-11-27 14:15:27.944631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.169 [2024-11-27 14:15:27.944689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.169 [2024-11-27 14:15:27.946863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.169 [2024-11-27 14:15:27.946968] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.169 14:15:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.169 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:57.169 "name": "Existed_Raid", 00:15:57.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.169 "strip_size_kb": 0, 00:15:57.169 "state": "configuring", 00:15:57.169 "raid_level": "raid1", 00:15:57.169 "superblock": false, 00:15:57.169 "num_base_bdevs": 4, 00:15:57.170 "num_base_bdevs_discovered": 3, 00:15:57.170 "num_base_bdevs_operational": 4, 00:15:57.170 "base_bdevs_list": [ 00:15:57.170 { 00:15:57.170 "name": "BaseBdev1", 00:15:57.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.170 "is_configured": false, 00:15:57.170 "data_offset": 0, 00:15:57.170 "data_size": 0 00:15:57.170 }, 00:15:57.170 { 00:15:57.170 "name": "BaseBdev2", 00:15:57.170 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:57.170 "is_configured": true, 00:15:57.170 "data_offset": 0, 00:15:57.170 "data_size": 65536 00:15:57.170 }, 00:15:57.170 { 00:15:57.170 "name": "BaseBdev3", 00:15:57.170 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:57.170 "is_configured": true, 00:15:57.170 "data_offset": 0, 00:15:57.170 "data_size": 65536 00:15:57.170 }, 00:15:57.170 { 00:15:57.170 "name": "BaseBdev4", 00:15:57.170 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:57.170 "is_configured": true, 00:15:57.170 "data_offset": 0, 00:15:57.170 "data_size": 65536 00:15:57.170 } 00:15:57.170 ] 00:15:57.170 }' 00:15:57.170 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.170 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 [2024-11-27 14:15:28.467728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.739 "name": "Existed_Raid", 00:15:57.739 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:57.739 "strip_size_kb": 0, 00:15:57.739 "state": "configuring", 00:15:57.739 "raid_level": "raid1", 00:15:57.739 "superblock": false, 00:15:57.739 "num_base_bdevs": 4, 00:15:57.739 "num_base_bdevs_discovered": 2, 00:15:57.739 "num_base_bdevs_operational": 4, 00:15:57.739 "base_bdevs_list": [ 00:15:57.739 { 00:15:57.739 "name": "BaseBdev1", 00:15:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.739 "is_configured": false, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 0 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": null, 00:15:57.739 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:57.739 "is_configured": false, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 65536 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": "BaseBdev3", 00:15:57.739 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:57.739 "is_configured": true, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 65536 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": "BaseBdev4", 00:15:57.739 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:57.739 "is_configured": true, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 65536 00:15:57.739 } 00:15:57.739 ] 00:15:57.739 }' 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.739 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.999 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.999 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.999 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.999 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.999 14:15:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.258 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:58.258 14:15:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.258 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.258 14:15:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.258 [2024-11-27 14:15:29.022108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.258 BaseBdev1 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.258 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.258 [ 00:15:58.258 { 00:15:58.258 "name": "BaseBdev1", 00:15:58.258 "aliases": [ 00:15:58.258 "02f33ef2-39e8-445a-b94e-cb0b9517bbc3" 00:15:58.258 ], 00:15:58.258 "product_name": "Malloc disk", 00:15:58.258 "block_size": 512, 00:15:58.258 "num_blocks": 65536, 00:15:58.258 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:15:58.258 "assigned_rate_limits": { 00:15:58.258 "rw_ios_per_sec": 0, 00:15:58.258 "rw_mbytes_per_sec": 0, 00:15:58.258 "r_mbytes_per_sec": 0, 00:15:58.258 "w_mbytes_per_sec": 0 00:15:58.258 }, 00:15:58.258 "claimed": true, 00:15:58.258 "claim_type": "exclusive_write", 00:15:58.258 "zoned": false, 00:15:58.258 "supported_io_types": { 00:15:58.259 "read": true, 00:15:58.259 "write": true, 00:15:58.259 "unmap": true, 00:15:58.259 "flush": true, 00:15:58.259 "reset": true, 00:15:58.259 "nvme_admin": false, 00:15:58.259 "nvme_io": false, 00:15:58.259 "nvme_io_md": false, 00:15:58.259 "write_zeroes": true, 00:15:58.259 "zcopy": true, 00:15:58.259 "get_zone_info": false, 00:15:58.259 "zone_management": false, 00:15:58.259 "zone_append": false, 00:15:58.259 "compare": false, 00:15:58.259 "compare_and_write": false, 00:15:58.259 "abort": true, 00:15:58.259 "seek_hole": false, 00:15:58.259 "seek_data": false, 00:15:58.259 "copy": true, 00:15:58.259 "nvme_iov_md": false 00:15:58.259 }, 00:15:58.259 "memory_domains": [ 00:15:58.259 { 00:15:58.259 "dma_device_id": "system", 00:15:58.259 "dma_device_type": 1 00:15:58.259 }, 00:15:58.259 { 00:15:58.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.259 "dma_device_type": 2 00:15:58.259 } 00:15:58.259 ], 00:15:58.259 "driver_specific": {} 00:15:58.259 } 00:15:58.259 ] 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.259 "name": "Existed_Raid", 00:15:58.259 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:58.259 "strip_size_kb": 0, 00:15:58.259 "state": "configuring", 00:15:58.259 "raid_level": "raid1", 00:15:58.259 "superblock": false, 00:15:58.259 "num_base_bdevs": 4, 00:15:58.259 "num_base_bdevs_discovered": 3, 00:15:58.259 "num_base_bdevs_operational": 4, 00:15:58.259 "base_bdevs_list": [ 00:15:58.259 { 00:15:58.259 "name": "BaseBdev1", 00:15:58.259 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:15:58.259 "is_configured": true, 00:15:58.259 "data_offset": 0, 00:15:58.259 "data_size": 65536 00:15:58.259 }, 00:15:58.259 { 00:15:58.259 "name": null, 00:15:58.259 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:58.259 "is_configured": false, 00:15:58.259 "data_offset": 0, 00:15:58.259 "data_size": 65536 00:15:58.259 }, 00:15:58.259 { 00:15:58.259 "name": "BaseBdev3", 00:15:58.259 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:58.259 "is_configured": true, 00:15:58.259 "data_offset": 0, 00:15:58.259 "data_size": 65536 00:15:58.259 }, 00:15:58.259 { 00:15:58.259 "name": "BaseBdev4", 00:15:58.259 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:58.259 "is_configured": true, 00:15:58.259 "data_offset": 0, 00:15:58.259 "data_size": 65536 00:15:58.259 } 00:15:58.259 ] 00:15:58.259 }' 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.259 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 [2024-11-27 14:15:29.613250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.826 "name": "Existed_Raid", 00:15:58.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.826 "strip_size_kb": 0, 00:15:58.826 "state": "configuring", 00:15:58.826 "raid_level": "raid1", 00:15:58.826 "superblock": false, 00:15:58.826 "num_base_bdevs": 4, 00:15:58.826 "num_base_bdevs_discovered": 2, 00:15:58.826 "num_base_bdevs_operational": 4, 00:15:58.826 "base_bdevs_list": [ 00:15:58.826 { 00:15:58.826 "name": "BaseBdev1", 00:15:58.826 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:15:58.826 "is_configured": true, 00:15:58.826 "data_offset": 0, 00:15:58.826 "data_size": 65536 00:15:58.826 }, 00:15:58.826 { 00:15:58.826 "name": null, 00:15:58.826 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:58.826 "is_configured": false, 00:15:58.826 "data_offset": 0, 00:15:58.826 "data_size": 65536 00:15:58.826 }, 00:15:58.826 { 00:15:58.826 "name": null, 00:15:58.826 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:58.826 "is_configured": false, 00:15:58.826 "data_offset": 0, 00:15:58.826 "data_size": 65536 00:15:58.826 }, 00:15:58.826 { 00:15:58.826 "name": "BaseBdev4", 00:15:58.826 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:58.826 "is_configured": true, 00:15:58.826 "data_offset": 0, 00:15:58.826 "data_size": 65536 00:15:58.826 } 00:15:58.826 ] 00:15:58.826 }' 00:15:58.826 14:15:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.826 14:15:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 [2024-11-27 14:15:30.104382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.396 14:15:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.396 "name": "Existed_Raid", 00:15:59.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.396 "strip_size_kb": 0, 00:15:59.396 "state": "configuring", 00:15:59.396 "raid_level": "raid1", 00:15:59.396 "superblock": false, 00:15:59.396 "num_base_bdevs": 4, 00:15:59.396 "num_base_bdevs_discovered": 3, 00:15:59.396 "num_base_bdevs_operational": 4, 00:15:59.396 "base_bdevs_list": [ 00:15:59.396 { 00:15:59.396 "name": "BaseBdev1", 00:15:59.396 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:15:59.396 "is_configured": true, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 65536 00:15:59.396 }, 00:15:59.396 { 00:15:59.396 "name": null, 00:15:59.396 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:59.396 "is_configured": false, 00:15:59.396 "data_offset": 
0, 00:15:59.396 "data_size": 65536 00:15:59.396 }, 00:15:59.396 { 00:15:59.396 "name": "BaseBdev3", 00:15:59.396 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:59.396 "is_configured": true, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 65536 00:15:59.396 }, 00:15:59.396 { 00:15:59.396 "name": "BaseBdev4", 00:15:59.396 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:59.396 "is_configured": true, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 65536 00:15:59.396 } 00:15:59.396 ] 00:15:59.396 }' 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.396 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 [2024-11-27 14:15:30.643734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.965 14:15:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.965 "name": "Existed_Raid", 00:15:59.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.965 "strip_size_kb": 0, 00:15:59.965 "state": "configuring", 00:15:59.965 
"raid_level": "raid1", 00:15:59.965 "superblock": false, 00:15:59.965 "num_base_bdevs": 4, 00:15:59.965 "num_base_bdevs_discovered": 2, 00:15:59.965 "num_base_bdevs_operational": 4, 00:15:59.965 "base_bdevs_list": [ 00:15:59.965 { 00:15:59.965 "name": null, 00:15:59.965 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:15:59.965 "is_configured": false, 00:15:59.965 "data_offset": 0, 00:15:59.965 "data_size": 65536 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "name": null, 00:15:59.965 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:15:59.965 "is_configured": false, 00:15:59.965 "data_offset": 0, 00:15:59.965 "data_size": 65536 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "name": "BaseBdev3", 00:15:59.965 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:15:59.965 "is_configured": true, 00:15:59.965 "data_offset": 0, 00:15:59.965 "data_size": 65536 00:15:59.965 }, 00:15:59.965 { 00:15:59.965 "name": "BaseBdev4", 00:15:59.965 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:15:59.965 "is_configured": true, 00:15:59.965 "data_offset": 0, 00:15:59.965 "data_size": 65536 00:15:59.965 } 00:15:59.965 ] 00:15:59.965 }' 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.965 14:15:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 [2024-11-27 14:15:31.301871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.535 "name": "Existed_Raid", 00:16:00.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.535 "strip_size_kb": 0, 00:16:00.535 "state": "configuring", 00:16:00.535 "raid_level": "raid1", 00:16:00.535 "superblock": false, 00:16:00.535 "num_base_bdevs": 4, 00:16:00.535 "num_base_bdevs_discovered": 3, 00:16:00.535 "num_base_bdevs_operational": 4, 00:16:00.535 "base_bdevs_list": [ 00:16:00.535 { 00:16:00.535 "name": null, 00:16:00.535 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:16:00.535 "is_configured": false, 00:16:00.535 "data_offset": 0, 00:16:00.535 "data_size": 65536 00:16:00.535 }, 00:16:00.535 { 00:16:00.535 "name": "BaseBdev2", 00:16:00.535 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:16:00.535 "is_configured": true, 00:16:00.535 "data_offset": 0, 00:16:00.535 "data_size": 65536 00:16:00.535 }, 00:16:00.535 { 00:16:00.535 "name": "BaseBdev3", 00:16:00.535 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:16:00.535 "is_configured": true, 00:16:00.535 "data_offset": 0, 00:16:00.535 "data_size": 65536 00:16:00.535 }, 00:16:00.535 { 00:16:00.535 "name": "BaseBdev4", 00:16:00.535 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:16:00.535 "is_configured": true, 00:16:00.535 "data_offset": 0, 00:16:00.535 "data_size": 65536 00:16:00.535 } 00:16:00.535 ] 00:16:00.535 }' 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.535 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 14:15:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 02f33ef2-39e8-445a-b94e-cb0b9517bbc3 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.104 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 [2024-11-27 14:15:31.928603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:01.104 [2024-11-27 14:15:31.928751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:01.104 [2024-11-27 14:15:31.928785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:01.104 
[2024-11-27 14:15:31.929168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:01.105 [2024-11-27 14:15:31.929416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:01.105 [2024-11-27 14:15:31.929473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:01.105 [2024-11-27 14:15:31.929799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.105 NewBaseBdev 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.105 [ 00:16:01.105 { 00:16:01.105 "name": "NewBaseBdev", 00:16:01.105 "aliases": [ 00:16:01.105 "02f33ef2-39e8-445a-b94e-cb0b9517bbc3" 00:16:01.105 ], 00:16:01.105 "product_name": "Malloc disk", 00:16:01.105 "block_size": 512, 00:16:01.105 "num_blocks": 65536, 00:16:01.105 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:16:01.105 "assigned_rate_limits": { 00:16:01.105 "rw_ios_per_sec": 0, 00:16:01.105 "rw_mbytes_per_sec": 0, 00:16:01.105 "r_mbytes_per_sec": 0, 00:16:01.105 "w_mbytes_per_sec": 0 00:16:01.105 }, 00:16:01.105 "claimed": true, 00:16:01.105 "claim_type": "exclusive_write", 00:16:01.105 "zoned": false, 00:16:01.105 "supported_io_types": { 00:16:01.105 "read": true, 00:16:01.105 "write": true, 00:16:01.105 "unmap": true, 00:16:01.105 "flush": true, 00:16:01.105 "reset": true, 00:16:01.105 "nvme_admin": false, 00:16:01.105 "nvme_io": false, 00:16:01.105 "nvme_io_md": false, 00:16:01.105 "write_zeroes": true, 00:16:01.105 "zcopy": true, 00:16:01.105 "get_zone_info": false, 00:16:01.105 "zone_management": false, 00:16:01.105 "zone_append": false, 00:16:01.105 "compare": false, 00:16:01.105 "compare_and_write": false, 00:16:01.105 "abort": true, 00:16:01.105 "seek_hole": false, 00:16:01.105 "seek_data": false, 00:16:01.105 "copy": true, 00:16:01.105 "nvme_iov_md": false 00:16:01.105 }, 00:16:01.105 "memory_domains": [ 00:16:01.105 { 00:16:01.105 "dma_device_id": "system", 00:16:01.105 "dma_device_type": 1 00:16:01.105 }, 00:16:01.105 { 00:16:01.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.105 "dma_device_type": 2 00:16:01.105 } 00:16:01.105 ], 00:16:01.105 "driver_specific": {} 00:16:01.105 } 00:16:01.105 ] 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.105 14:15:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.105 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.105 "name": "Existed_Raid", 00:16:01.105 "uuid": "60d274da-09cf-48df-a0cd-2c99862cf6e6", 00:16:01.105 "strip_size_kb": 0, 00:16:01.105 "state": "online", 00:16:01.105 
"raid_level": "raid1", 00:16:01.105 "superblock": false, 00:16:01.105 "num_base_bdevs": 4, 00:16:01.105 "num_base_bdevs_discovered": 4, 00:16:01.105 "num_base_bdevs_operational": 4, 00:16:01.105 "base_bdevs_list": [ 00:16:01.105 { 00:16:01.105 "name": "NewBaseBdev", 00:16:01.105 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:16:01.105 "is_configured": true, 00:16:01.105 "data_offset": 0, 00:16:01.105 "data_size": 65536 00:16:01.105 }, 00:16:01.105 { 00:16:01.105 "name": "BaseBdev2", 00:16:01.105 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:16:01.105 "is_configured": true, 00:16:01.105 "data_offset": 0, 00:16:01.105 "data_size": 65536 00:16:01.105 }, 00:16:01.105 { 00:16:01.105 "name": "BaseBdev3", 00:16:01.105 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:16:01.105 "is_configured": true, 00:16:01.105 "data_offset": 0, 00:16:01.105 "data_size": 65536 00:16:01.105 }, 00:16:01.105 { 00:16:01.105 "name": "BaseBdev4", 00:16:01.105 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:16:01.105 "is_configured": true, 00:16:01.105 "data_offset": 0, 00:16:01.105 "data_size": 65536 00:16:01.105 } 00:16:01.105 ] 00:16:01.105 }' 00:16:01.105 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.105 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:01.680 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:01.680 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.681 [2024-11-27 14:15:32.456324] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.681 "name": "Existed_Raid", 00:16:01.681 "aliases": [ 00:16:01.681 "60d274da-09cf-48df-a0cd-2c99862cf6e6" 00:16:01.681 ], 00:16:01.681 "product_name": "Raid Volume", 00:16:01.681 "block_size": 512, 00:16:01.681 "num_blocks": 65536, 00:16:01.681 "uuid": "60d274da-09cf-48df-a0cd-2c99862cf6e6", 00:16:01.681 "assigned_rate_limits": { 00:16:01.681 "rw_ios_per_sec": 0, 00:16:01.681 "rw_mbytes_per_sec": 0, 00:16:01.681 "r_mbytes_per_sec": 0, 00:16:01.681 "w_mbytes_per_sec": 0 00:16:01.681 }, 00:16:01.681 "claimed": false, 00:16:01.681 "zoned": false, 00:16:01.681 "supported_io_types": { 00:16:01.681 "read": true, 00:16:01.681 "write": true, 00:16:01.681 "unmap": false, 00:16:01.681 "flush": false, 00:16:01.681 "reset": true, 00:16:01.681 "nvme_admin": false, 00:16:01.681 "nvme_io": false, 00:16:01.681 "nvme_io_md": false, 00:16:01.681 "write_zeroes": true, 00:16:01.681 "zcopy": false, 00:16:01.681 "get_zone_info": false, 00:16:01.681 "zone_management": false, 00:16:01.681 "zone_append": false, 00:16:01.681 "compare": false, 00:16:01.681 "compare_and_write": false, 00:16:01.681 "abort": false, 00:16:01.681 "seek_hole": false, 00:16:01.681 "seek_data": false, 00:16:01.681 
"copy": false, 00:16:01.681 "nvme_iov_md": false 00:16:01.681 }, 00:16:01.681 "memory_domains": [ 00:16:01.681 { 00:16:01.681 "dma_device_id": "system", 00:16:01.681 "dma_device_type": 1 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.681 "dma_device_type": 2 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "system", 00:16:01.681 "dma_device_type": 1 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.681 "dma_device_type": 2 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "system", 00:16:01.681 "dma_device_type": 1 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.681 "dma_device_type": 2 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "system", 00:16:01.681 "dma_device_type": 1 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.681 "dma_device_type": 2 00:16:01.681 } 00:16:01.681 ], 00:16:01.681 "driver_specific": { 00:16:01.681 "raid": { 00:16:01.681 "uuid": "60d274da-09cf-48df-a0cd-2c99862cf6e6", 00:16:01.681 "strip_size_kb": 0, 00:16:01.681 "state": "online", 00:16:01.681 "raid_level": "raid1", 00:16:01.681 "superblock": false, 00:16:01.681 "num_base_bdevs": 4, 00:16:01.681 "num_base_bdevs_discovered": 4, 00:16:01.681 "num_base_bdevs_operational": 4, 00:16:01.681 "base_bdevs_list": [ 00:16:01.681 { 00:16:01.681 "name": "NewBaseBdev", 00:16:01.681 "uuid": "02f33ef2-39e8-445a-b94e-cb0b9517bbc3", 00:16:01.681 "is_configured": true, 00:16:01.681 "data_offset": 0, 00:16:01.681 "data_size": 65536 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "name": "BaseBdev2", 00:16:01.681 "uuid": "f3e8cede-21ba-4da7-a6ac-0f35938996c8", 00:16:01.681 "is_configured": true, 00:16:01.681 "data_offset": 0, 00:16:01.681 "data_size": 65536 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "name": "BaseBdev3", 00:16:01.681 "uuid": "c4b53d80-fd8e-417d-8467-3aa090b36b8b", 00:16:01.681 
"is_configured": true, 00:16:01.681 "data_offset": 0, 00:16:01.681 "data_size": 65536 00:16:01.681 }, 00:16:01.681 { 00:16:01.681 "name": "BaseBdev4", 00:16:01.681 "uuid": "3c117bbd-3bc3-4ac2-ad0b-7c166398a712", 00:16:01.681 "is_configured": true, 00:16:01.681 "data_offset": 0, 00:16:01.681 "data_size": 65536 00:16:01.681 } 00:16:01.681 ] 00:16:01.681 } 00:16:01.681 } 00:16:01.681 }' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:01.681 BaseBdev2 00:16:01.681 BaseBdev3 00:16:01.681 BaseBdev4' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.681 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.941 14:15:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.941 14:15:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.941 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.941 [2024-11-27 14:15:32.815305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.941 [2024-11-27 14:15:32.815337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.942 [2024-11-27 14:15:32.815450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.942 [2024-11-27 14:15:32.815777] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.942 [2024-11-27 14:15:32.815792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73419 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73419 ']' 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73419 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73419 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73419' 00:16:01.942 killing process with pid 73419 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73419 00:16:01.942 [2024-11-27 14:15:32.865736] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.942 14:15:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73419 00:16:02.510 [2024-11-27 14:15:33.340778] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:03.892 00:16:03.892 real 0m12.699s 00:16:03.892 user 0m20.004s 00:16:03.892 sys 0m2.304s 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.892 ************************************ 00:16:03.892 END TEST raid_state_function_test 00:16:03.892 ************************************ 
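The loop traced above (bdev_raid.sh@191-193) queries each base bdev with `rpc_cmd bdev_get_bdevs -b <name>`, collapses `block_size`, `md_size`, `md_interleave`, and `dif_type` into one string via `jq -r '... | join(" ")'`, and compares it using a bash `[[ ... == pattern ]]` test; the `\5\1\2\ \ \ ` form in the trace is just `512` followed by three escaped spaces (empty fields joined by `" "`), as xtrace prints it. A minimal sketch of that comparison, with a hypothetical `get_bdev_fields` stub standing in for the real RPC + jq pipeline:

```shell
#!/usr/bin/env bash
# Hedged sketch of the base-bdev metadata check. get_bdev_fields is a
# hypothetical stand-in for:
#   rpc_cmd bdev_get_bdevs -b "$name" \
#     | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# The hard-coded output below is an assumption, not real SPDK output.
get_bdev_fields() {
    # block_size=512 with md_size/md_interleave/dif_type unset: jq's join(" ")
    # emits "512" plus three separators around empty strings -> "512   "
    printf '512   '
}

expected='512   '   # note the trailing spaces: they are significant
for name in BaseBdev1 BaseBdev2; do
    cmp_base_bdev=$(get_bdev_fields "$name")
    # [[ ... == "$expected" ]] quotes the pattern, so the trailing spaces
    # must match exactly -- the same effect as the escaped \ \ \  in the trace
    if [[ $cmp_base_bdev == "$expected" ]]; then
        echo "$name: metadata layout OK"
    else
        echo "$name: unexpected layout '$cmp_base_bdev'" >&2
        exit 1
    fi
done
```

Command substitution strips trailing newlines but keeps trailing spaces, which is why the comparison string in the real test carries them too.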
00:16:03.892 14:15:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:16:03.892 14:15:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:03.892 14:15:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.892 14:15:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.892 ************************************ 00:16:03.892 START TEST raid_state_function_test_sb 00:16:03.892 ************************************ 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.892 
14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74101 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 74101' 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:03.892 Process raid pid: 74101 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74101 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74101 ']' 00:16:03.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.892 14:15:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.152 [2024-11-27 14:15:34.876725] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:04.152 [2024-11-27 14:15:34.876953] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.152 [2024-11-27 14:15:35.061086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.412 [2024-11-27 14:15:35.201233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.672 [2024-11-27 14:15:35.451420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.672 [2024-11-27 14:15:35.451455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.931 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.931 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:04.931 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.931 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.932 [2024-11-27 14:15:35.786339] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.932 [2024-11-27 14:15:35.786403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.932 [2024-11-27 14:15:35.786416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.932 [2024-11-27 14:15:35.786428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.932 [2024-11-27 14:15:35.786435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:04.932 [2024-11-27 14:15:35.786446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.932 [2024-11-27 14:15:35.786453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.932 [2024-11-27 14:15:35.786463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.932 14:15:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.932 "name": "Existed_Raid", 00:16:04.932 "uuid": "e4a2cc9a-4bbd-445d-97ce-3209c1a8d611", 00:16:04.932 "strip_size_kb": 0, 00:16:04.932 "state": "configuring", 00:16:04.932 "raid_level": "raid1", 00:16:04.932 "superblock": true, 00:16:04.932 "num_base_bdevs": 4, 00:16:04.932 "num_base_bdevs_discovered": 0, 00:16:04.932 "num_base_bdevs_operational": 4, 00:16:04.932 "base_bdevs_list": [ 00:16:04.932 { 00:16:04.932 "name": "BaseBdev1", 00:16:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.932 "is_configured": false, 00:16:04.932 "data_offset": 0, 00:16:04.932 "data_size": 0 00:16:04.932 }, 00:16:04.932 { 00:16:04.932 "name": "BaseBdev2", 00:16:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.932 "is_configured": false, 00:16:04.932 "data_offset": 0, 00:16:04.932 "data_size": 0 00:16:04.932 }, 00:16:04.932 { 00:16:04.932 "name": "BaseBdev3", 00:16:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.932 "is_configured": false, 00:16:04.932 "data_offset": 0, 00:16:04.932 "data_size": 0 00:16:04.932 }, 00:16:04.932 { 00:16:04.932 "name": "BaseBdev4", 00:16:04.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.932 "is_configured": false, 00:16:04.932 "data_offset": 0, 00:16:04.932 "data_size": 0 00:16:04.932 } 00:16:04.932 ] 00:16:04.932 }' 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.932 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 [2024-11-27 14:15:36.281441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.501 [2024-11-27 14:15:36.281562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 [2024-11-27 14:15:36.293440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.501 [2024-11-27 14:15:36.293554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.501 [2024-11-27 14:15:36.293592] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.501 [2024-11-27 14:15:36.293621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.501 [2024-11-27 14:15:36.293662] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:05.501 [2024-11-27 14:15:36.293689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:05.501 [2024-11-27 14:15:36.293727] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:05.501 [2024-11-27 14:15:36.293763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 [2024-11-27 14:15:36.348725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.501 BaseBdev1 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.501 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.501 [ 00:16:05.501 { 00:16:05.501 "name": "BaseBdev1", 00:16:05.501 "aliases": [ 00:16:05.501 "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3" 00:16:05.501 ], 00:16:05.501 "product_name": "Malloc disk", 00:16:05.501 "block_size": 512, 00:16:05.502 "num_blocks": 65536, 00:16:05.502 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:05.502 "assigned_rate_limits": { 00:16:05.502 "rw_ios_per_sec": 0, 00:16:05.502 "rw_mbytes_per_sec": 0, 00:16:05.502 "r_mbytes_per_sec": 0, 00:16:05.502 "w_mbytes_per_sec": 0 00:16:05.502 }, 00:16:05.502 "claimed": true, 00:16:05.502 "claim_type": "exclusive_write", 00:16:05.502 "zoned": false, 00:16:05.502 "supported_io_types": { 00:16:05.502 "read": true, 00:16:05.502 "write": true, 00:16:05.502 "unmap": true, 00:16:05.502 "flush": true, 00:16:05.502 "reset": true, 00:16:05.502 "nvme_admin": false, 00:16:05.502 "nvme_io": false, 00:16:05.502 "nvme_io_md": false, 00:16:05.502 "write_zeroes": true, 00:16:05.502 "zcopy": true, 00:16:05.502 "get_zone_info": false, 00:16:05.502 "zone_management": false, 00:16:05.502 "zone_append": false, 00:16:05.502 "compare": false, 00:16:05.502 "compare_and_write": false, 00:16:05.502 "abort": true, 00:16:05.502 "seek_hole": false, 00:16:05.502 "seek_data": false, 00:16:05.502 "copy": true, 00:16:05.502 "nvme_iov_md": false 00:16:05.502 }, 00:16:05.502 "memory_domains": [ 00:16:05.502 { 00:16:05.502 "dma_device_id": "system", 00:16:05.502 "dma_device_type": 1 00:16:05.502 }, 00:16:05.502 { 00:16:05.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.502 "dma_device_type": 2 00:16:05.502 } 00:16:05.502 ], 00:16:05.502 "driver_specific": {} 
00:16:05.502 } 00:16:05.502 ] 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.502 "name": "Existed_Raid", 00:16:05.502 "uuid": "28f3cf92-d704-4bda-8f19-accc461825c1", 00:16:05.502 "strip_size_kb": 0, 00:16:05.502 "state": "configuring", 00:16:05.502 "raid_level": "raid1", 00:16:05.502 "superblock": true, 00:16:05.502 "num_base_bdevs": 4, 00:16:05.502 "num_base_bdevs_discovered": 1, 00:16:05.502 "num_base_bdevs_operational": 4, 00:16:05.502 "base_bdevs_list": [ 00:16:05.502 { 00:16:05.502 "name": "BaseBdev1", 00:16:05.502 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:05.502 "is_configured": true, 00:16:05.502 "data_offset": 2048, 00:16:05.502 "data_size": 63488 00:16:05.502 }, 00:16:05.502 { 00:16:05.502 "name": "BaseBdev2", 00:16:05.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.502 "is_configured": false, 00:16:05.502 "data_offset": 0, 00:16:05.502 "data_size": 0 00:16:05.502 }, 00:16:05.502 { 00:16:05.502 "name": "BaseBdev3", 00:16:05.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.502 "is_configured": false, 00:16:05.502 "data_offset": 0, 00:16:05.502 "data_size": 0 00:16:05.502 }, 00:16:05.502 { 00:16:05.502 "name": "BaseBdev4", 00:16:05.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.502 "is_configured": false, 00:16:05.502 "data_offset": 0, 00:16:05.502 "data_size": 0 00:16:05.502 } 00:16:05.502 ] 00:16:05.502 }' 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.502 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.072 [2024-11-27 14:15:36.871908] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.072 [2024-11-27 14:15:36.872047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.072 [2024-11-27 14:15:36.883913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.072 [2024-11-27 14:15:36.885892] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.072 [2024-11-27 14:15:36.885969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.072 [2024-11-27 14:15:36.886014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:06.072 [2024-11-27 14:15:36.886041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:06.072 [2024-11-27 14:15:36.886060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:06.072 [2024-11-27 14:15:36.886081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:06.072 14:15:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.072 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.072 "name": 
"Existed_Raid", 00:16:06.073 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:06.073 "strip_size_kb": 0, 00:16:06.073 "state": "configuring", 00:16:06.073 "raid_level": "raid1", 00:16:06.073 "superblock": true, 00:16:06.073 "num_base_bdevs": 4, 00:16:06.073 "num_base_bdevs_discovered": 1, 00:16:06.073 "num_base_bdevs_operational": 4, 00:16:06.073 "base_bdevs_list": [ 00:16:06.073 { 00:16:06.073 "name": "BaseBdev1", 00:16:06.073 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:06.073 "is_configured": true, 00:16:06.073 "data_offset": 2048, 00:16:06.073 "data_size": 63488 00:16:06.073 }, 00:16:06.073 { 00:16:06.073 "name": "BaseBdev2", 00:16:06.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.073 "is_configured": false, 00:16:06.073 "data_offset": 0, 00:16:06.073 "data_size": 0 00:16:06.073 }, 00:16:06.073 { 00:16:06.073 "name": "BaseBdev3", 00:16:06.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.073 "is_configured": false, 00:16:06.073 "data_offset": 0, 00:16:06.073 "data_size": 0 00:16:06.073 }, 00:16:06.073 { 00:16:06.073 "name": "BaseBdev4", 00:16:06.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.073 "is_configured": false, 00:16:06.073 "data_offset": 0, 00:16:06.073 "data_size": 0 00:16:06.073 } 00:16:06.073 ] 00:16:06.073 }' 00:16:06.073 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.073 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.644 [2024-11-27 14:15:37.399961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.644 
BaseBdev2 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.644 [ 00:16:06.644 { 00:16:06.644 "name": "BaseBdev2", 00:16:06.644 "aliases": [ 00:16:06.644 "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8" 00:16:06.644 ], 00:16:06.644 "product_name": "Malloc disk", 00:16:06.644 "block_size": 512, 00:16:06.644 "num_blocks": 65536, 00:16:06.644 "uuid": "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:06.644 "assigned_rate_limits": { 
00:16:06.644 "rw_ios_per_sec": 0, 00:16:06.644 "rw_mbytes_per_sec": 0, 00:16:06.644 "r_mbytes_per_sec": 0, 00:16:06.644 "w_mbytes_per_sec": 0 00:16:06.644 }, 00:16:06.644 "claimed": true, 00:16:06.644 "claim_type": "exclusive_write", 00:16:06.644 "zoned": false, 00:16:06.644 "supported_io_types": { 00:16:06.644 "read": true, 00:16:06.644 "write": true, 00:16:06.644 "unmap": true, 00:16:06.644 "flush": true, 00:16:06.644 "reset": true, 00:16:06.644 "nvme_admin": false, 00:16:06.644 "nvme_io": false, 00:16:06.644 "nvme_io_md": false, 00:16:06.644 "write_zeroes": true, 00:16:06.644 "zcopy": true, 00:16:06.644 "get_zone_info": false, 00:16:06.644 "zone_management": false, 00:16:06.644 "zone_append": false, 00:16:06.644 "compare": false, 00:16:06.644 "compare_and_write": false, 00:16:06.644 "abort": true, 00:16:06.644 "seek_hole": false, 00:16:06.644 "seek_data": false, 00:16:06.644 "copy": true, 00:16:06.644 "nvme_iov_md": false 00:16:06.644 }, 00:16:06.644 "memory_domains": [ 00:16:06.644 { 00:16:06.644 "dma_device_id": "system", 00:16:06.644 "dma_device_type": 1 00:16:06.644 }, 00:16:06.644 { 00:16:06.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.644 "dma_device_type": 2 00:16:06.644 } 00:16:06.644 ], 00:16:06.644 "driver_specific": {} 00:16:06.644 } 00:16:06.644 ] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.644 "name": "Existed_Raid", 00:16:06.644 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:06.644 "strip_size_kb": 0, 00:16:06.644 "state": "configuring", 00:16:06.644 "raid_level": "raid1", 00:16:06.644 "superblock": true, 00:16:06.644 "num_base_bdevs": 4, 00:16:06.644 "num_base_bdevs_discovered": 2, 00:16:06.644 "num_base_bdevs_operational": 4, 00:16:06.644 
"base_bdevs_list": [ 00:16:06.644 { 00:16:06.644 "name": "BaseBdev1", 00:16:06.644 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:06.644 "is_configured": true, 00:16:06.644 "data_offset": 2048, 00:16:06.644 "data_size": 63488 00:16:06.644 }, 00:16:06.644 { 00:16:06.644 "name": "BaseBdev2", 00:16:06.644 "uuid": "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:06.644 "is_configured": true, 00:16:06.644 "data_offset": 2048, 00:16:06.644 "data_size": 63488 00:16:06.644 }, 00:16:06.644 { 00:16:06.644 "name": "BaseBdev3", 00:16:06.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.644 "is_configured": false, 00:16:06.644 "data_offset": 0, 00:16:06.644 "data_size": 0 00:16:06.644 }, 00:16:06.644 { 00:16:06.644 "name": "BaseBdev4", 00:16:06.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.644 "is_configured": false, 00:16:06.644 "data_offset": 0, 00:16:06.644 "data_size": 0 00:16:06.644 } 00:16:06.644 ] 00:16:06.644 }' 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.644 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 [2024-11-27 14:15:37.911998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.217 BaseBdev3 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 [ 00:16:07.217 { 00:16:07.217 "name": "BaseBdev3", 00:16:07.217 "aliases": [ 00:16:07.217 "116567e2-a61e-4e05-853c-95d691ebfa7d" 00:16:07.217 ], 00:16:07.217 "product_name": "Malloc disk", 00:16:07.217 "block_size": 512, 00:16:07.217 "num_blocks": 65536, 00:16:07.217 "uuid": "116567e2-a61e-4e05-853c-95d691ebfa7d", 00:16:07.217 "assigned_rate_limits": { 00:16:07.217 "rw_ios_per_sec": 0, 00:16:07.217 "rw_mbytes_per_sec": 0, 00:16:07.217 "r_mbytes_per_sec": 0, 00:16:07.217 "w_mbytes_per_sec": 0 00:16:07.217 }, 00:16:07.217 "claimed": true, 00:16:07.217 "claim_type": "exclusive_write", 00:16:07.217 "zoned": false, 00:16:07.217 "supported_io_types": { 00:16:07.217 "read": true, 00:16:07.217 
"write": true, 00:16:07.217 "unmap": true, 00:16:07.217 "flush": true, 00:16:07.217 "reset": true, 00:16:07.217 "nvme_admin": false, 00:16:07.217 "nvme_io": false, 00:16:07.217 "nvme_io_md": false, 00:16:07.217 "write_zeroes": true, 00:16:07.217 "zcopy": true, 00:16:07.217 "get_zone_info": false, 00:16:07.217 "zone_management": false, 00:16:07.217 "zone_append": false, 00:16:07.217 "compare": false, 00:16:07.217 "compare_and_write": false, 00:16:07.217 "abort": true, 00:16:07.217 "seek_hole": false, 00:16:07.217 "seek_data": false, 00:16:07.217 "copy": true, 00:16:07.217 "nvme_iov_md": false 00:16:07.217 }, 00:16:07.217 "memory_domains": [ 00:16:07.217 { 00:16:07.217 "dma_device_id": "system", 00:16:07.217 "dma_device_type": 1 00:16:07.217 }, 00:16:07.217 { 00:16:07.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.217 "dma_device_type": 2 00:16:07.217 } 00:16:07.217 ], 00:16:07.217 "driver_specific": {} 00:16:07.217 } 00:16:07.217 ] 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.217 14:15:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.217 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.217 "name": "Existed_Raid", 00:16:07.217 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:07.217 "strip_size_kb": 0, 00:16:07.217 "state": "configuring", 00:16:07.217 "raid_level": "raid1", 00:16:07.217 "superblock": true, 00:16:07.217 "num_base_bdevs": 4, 00:16:07.217 "num_base_bdevs_discovered": 3, 00:16:07.217 "num_base_bdevs_operational": 4, 00:16:07.217 "base_bdevs_list": [ 00:16:07.217 { 00:16:07.217 "name": "BaseBdev1", 00:16:07.217 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:07.217 "is_configured": true, 00:16:07.217 "data_offset": 2048, 00:16:07.217 "data_size": 63488 00:16:07.217 }, 00:16:07.217 { 00:16:07.217 "name": "BaseBdev2", 00:16:07.217 "uuid": 
"ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:07.217 "is_configured": true, 00:16:07.217 "data_offset": 2048, 00:16:07.217 "data_size": 63488 00:16:07.217 }, 00:16:07.217 { 00:16:07.217 "name": "BaseBdev3", 00:16:07.217 "uuid": "116567e2-a61e-4e05-853c-95d691ebfa7d", 00:16:07.217 "is_configured": true, 00:16:07.217 "data_offset": 2048, 00:16:07.217 "data_size": 63488 00:16:07.217 }, 00:16:07.217 { 00:16:07.217 "name": "BaseBdev4", 00:16:07.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.217 "is_configured": false, 00:16:07.217 "data_offset": 0, 00:16:07.217 "data_size": 0 00:16:07.217 } 00:16:07.217 ] 00:16:07.217 }' 00:16:07.217 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.217 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.478 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:07.478 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.478 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.738 [2024-11-27 14:15:38.437674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:07.738 [2024-11-27 14:15:38.438039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:07.738 [2024-11-27 14:15:38.438097] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.738 [2024-11-27 14:15:38.438425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:07.738 BaseBdev4 00:16:07.738 [2024-11-27 14:15:38.438633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:07.738 [2024-11-27 14:15:38.438648] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:07.738 [2024-11-27 14:15:38.438807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.738 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.738 [ 00:16:07.738 { 00:16:07.738 "name": "BaseBdev4", 00:16:07.738 "aliases": [ 00:16:07.738 "41eb26cd-9cb7-496a-af55-f9db305ad5a7" 00:16:07.738 ], 00:16:07.738 "product_name": "Malloc disk", 00:16:07.738 "block_size": 512, 00:16:07.738 
"num_blocks": 65536, 00:16:07.738 "uuid": "41eb26cd-9cb7-496a-af55-f9db305ad5a7", 00:16:07.738 "assigned_rate_limits": { 00:16:07.738 "rw_ios_per_sec": 0, 00:16:07.738 "rw_mbytes_per_sec": 0, 00:16:07.738 "r_mbytes_per_sec": 0, 00:16:07.738 "w_mbytes_per_sec": 0 00:16:07.738 }, 00:16:07.738 "claimed": true, 00:16:07.738 "claim_type": "exclusive_write", 00:16:07.738 "zoned": false, 00:16:07.738 "supported_io_types": { 00:16:07.738 "read": true, 00:16:07.738 "write": true, 00:16:07.738 "unmap": true, 00:16:07.738 "flush": true, 00:16:07.738 "reset": true, 00:16:07.738 "nvme_admin": false, 00:16:07.738 "nvme_io": false, 00:16:07.738 "nvme_io_md": false, 00:16:07.738 "write_zeroes": true, 00:16:07.738 "zcopy": true, 00:16:07.738 "get_zone_info": false, 00:16:07.738 "zone_management": false, 00:16:07.738 "zone_append": false, 00:16:07.738 "compare": false, 00:16:07.738 "compare_and_write": false, 00:16:07.738 "abort": true, 00:16:07.738 "seek_hole": false, 00:16:07.738 "seek_data": false, 00:16:07.738 "copy": true, 00:16:07.738 "nvme_iov_md": false 00:16:07.738 }, 00:16:07.738 "memory_domains": [ 00:16:07.738 { 00:16:07.738 "dma_device_id": "system", 00:16:07.738 "dma_device_type": 1 00:16:07.738 }, 00:16:07.738 { 00:16:07.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.738 "dma_device_type": 2 00:16:07.738 } 00:16:07.738 ], 00:16:07.738 "driver_specific": {} 00:16:07.738 } 00:16:07.738 ] 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.739 "name": "Existed_Raid", 00:16:07.739 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:07.739 "strip_size_kb": 0, 00:16:07.739 "state": "online", 00:16:07.739 "raid_level": "raid1", 00:16:07.739 "superblock": true, 00:16:07.739 "num_base_bdevs": 4, 
00:16:07.739 "num_base_bdevs_discovered": 4, 00:16:07.739 "num_base_bdevs_operational": 4, 00:16:07.739 "base_bdevs_list": [ 00:16:07.739 { 00:16:07.739 "name": "BaseBdev1", 00:16:07.739 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:07.739 "is_configured": true, 00:16:07.739 "data_offset": 2048, 00:16:07.739 "data_size": 63488 00:16:07.739 }, 00:16:07.739 { 00:16:07.739 "name": "BaseBdev2", 00:16:07.739 "uuid": "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:07.739 "is_configured": true, 00:16:07.739 "data_offset": 2048, 00:16:07.739 "data_size": 63488 00:16:07.739 }, 00:16:07.739 { 00:16:07.739 "name": "BaseBdev3", 00:16:07.739 "uuid": "116567e2-a61e-4e05-853c-95d691ebfa7d", 00:16:07.739 "is_configured": true, 00:16:07.739 "data_offset": 2048, 00:16:07.739 "data_size": 63488 00:16:07.739 }, 00:16:07.739 { 00:16:07.739 "name": "BaseBdev4", 00:16:07.739 "uuid": "41eb26cd-9cb7-496a-af55-f9db305ad5a7", 00:16:07.739 "is_configured": true, 00:16:07.739 "data_offset": 2048, 00:16:07.739 "data_size": 63488 00:16:07.739 } 00:16:07.739 ] 00:16:07.739 }' 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.739 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.999 
14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.999 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.999 [2024-11-27 14:15:38.937252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.259 14:15:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.259 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.259 "name": "Existed_Raid", 00:16:08.259 "aliases": [ 00:16:08.259 "6fec3dfd-5166-47ec-bb95-e550127265b2" 00:16:08.259 ], 00:16:08.259 "product_name": "Raid Volume", 00:16:08.259 "block_size": 512, 00:16:08.259 "num_blocks": 63488, 00:16:08.259 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:08.259 "assigned_rate_limits": { 00:16:08.259 "rw_ios_per_sec": 0, 00:16:08.259 "rw_mbytes_per_sec": 0, 00:16:08.259 "r_mbytes_per_sec": 0, 00:16:08.259 "w_mbytes_per_sec": 0 00:16:08.259 }, 00:16:08.259 "claimed": false, 00:16:08.259 "zoned": false, 00:16:08.259 "supported_io_types": { 00:16:08.259 "read": true, 00:16:08.259 "write": true, 00:16:08.259 "unmap": false, 00:16:08.259 "flush": false, 00:16:08.259 "reset": true, 00:16:08.259 "nvme_admin": false, 00:16:08.259 "nvme_io": false, 00:16:08.259 "nvme_io_md": false, 00:16:08.259 "write_zeroes": true, 00:16:08.259 "zcopy": false, 00:16:08.259 "get_zone_info": false, 00:16:08.259 "zone_management": false, 00:16:08.259 "zone_append": false, 00:16:08.259 "compare": false, 00:16:08.259 "compare_and_write": false, 00:16:08.259 "abort": false, 00:16:08.259 "seek_hole": false, 00:16:08.259 "seek_data": false, 00:16:08.259 "copy": false, 00:16:08.259 
"nvme_iov_md": false 00:16:08.259 }, 00:16:08.259 "memory_domains": [ 00:16:08.259 { 00:16:08.259 "dma_device_id": "system", 00:16:08.259 "dma_device_type": 1 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.259 "dma_device_type": 2 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "system", 00:16:08.259 "dma_device_type": 1 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.259 "dma_device_type": 2 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "system", 00:16:08.259 "dma_device_type": 1 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.259 "dma_device_type": 2 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "system", 00:16:08.259 "dma_device_type": 1 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.259 "dma_device_type": 2 00:16:08.259 } 00:16:08.259 ], 00:16:08.259 "driver_specific": { 00:16:08.259 "raid": { 00:16:08.259 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:08.259 "strip_size_kb": 0, 00:16:08.259 "state": "online", 00:16:08.259 "raid_level": "raid1", 00:16:08.259 "superblock": true, 00:16:08.259 "num_base_bdevs": 4, 00:16:08.259 "num_base_bdevs_discovered": 4, 00:16:08.259 "num_base_bdevs_operational": 4, 00:16:08.259 "base_bdevs_list": [ 00:16:08.259 { 00:16:08.259 "name": "BaseBdev1", 00:16:08.259 "uuid": "1e39a925-d405-4d0f-9b67-c94d4bc6f5d3", 00:16:08.259 "is_configured": true, 00:16:08.259 "data_offset": 2048, 00:16:08.259 "data_size": 63488 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "name": "BaseBdev2", 00:16:08.259 "uuid": "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:08.259 "is_configured": true, 00:16:08.259 "data_offset": 2048, 00:16:08.259 "data_size": 63488 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "name": "BaseBdev3", 00:16:08.259 "uuid": "116567e2-a61e-4e05-853c-95d691ebfa7d", 00:16:08.259 "is_configured": true, 
00:16:08.259 "data_offset": 2048, 00:16:08.259 "data_size": 63488 00:16:08.259 }, 00:16:08.259 { 00:16:08.259 "name": "BaseBdev4", 00:16:08.259 "uuid": "41eb26cd-9cb7-496a-af55-f9db305ad5a7", 00:16:08.259 "is_configured": true, 00:16:08.259 "data_offset": 2048, 00:16:08.259 "data_size": 63488 00:16:08.259 } 00:16:08.259 ] 00:16:08.259 } 00:16:08.259 } 00:16:08.259 }' 00:16:08.259 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.259 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:08.259 BaseBdev2 00:16:08.259 BaseBdev3 00:16:08.259 BaseBdev4' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.259 14:15:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.259 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 [2024-11-27 14:15:39.260390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:08.520 14:15:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.520 "name": "Existed_Raid", 00:16:08.520 "uuid": "6fec3dfd-5166-47ec-bb95-e550127265b2", 00:16:08.520 "strip_size_kb": 0, 00:16:08.520 
"state": "online", 00:16:08.520 "raid_level": "raid1", 00:16:08.520 "superblock": true, 00:16:08.520 "num_base_bdevs": 4, 00:16:08.520 "num_base_bdevs_discovered": 3, 00:16:08.520 "num_base_bdevs_operational": 3, 00:16:08.520 "base_bdevs_list": [ 00:16:08.520 { 00:16:08.520 "name": null, 00:16:08.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.520 "is_configured": false, 00:16:08.520 "data_offset": 0, 00:16:08.520 "data_size": 63488 00:16:08.520 }, 00:16:08.520 { 00:16:08.520 "name": "BaseBdev2", 00:16:08.520 "uuid": "ea5cfa8b-a617-45fb-bfa2-aa5956aec9d8", 00:16:08.520 "is_configured": true, 00:16:08.520 "data_offset": 2048, 00:16:08.520 "data_size": 63488 00:16:08.520 }, 00:16:08.520 { 00:16:08.520 "name": "BaseBdev3", 00:16:08.520 "uuid": "116567e2-a61e-4e05-853c-95d691ebfa7d", 00:16:08.520 "is_configured": true, 00:16:08.520 "data_offset": 2048, 00:16:08.520 "data_size": 63488 00:16:08.520 }, 00:16:08.520 { 00:16:08.520 "name": "BaseBdev4", 00:16:08.520 "uuid": "41eb26cd-9cb7-496a-af55-f9db305ad5a7", 00:16:08.520 "is_configured": true, 00:16:08.520 "data_offset": 2048, 00:16:08.520 "data_size": 63488 00:16:08.520 } 00:16:08.520 ] 00:16:08.520 }' 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.520 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.089 14:15:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 [2024-11-27 14:15:39.861232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 14:15:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.089 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.089 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:09.089 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:09.089 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.089 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.089 [2024-11-27 14:15:40.014961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.347 [2024-11-27 14:15:40.165409] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:09.347 [2024-11-27 14:15:40.165518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.347 [2024-11-27 14:15:40.261603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.347 [2024-11-27 14:15:40.261743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.347 [2024-11-27 14:15:40.261762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.347 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 BaseBdev2 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:09.607 [ 00:16:09.607 { 00:16:09.607 "name": "BaseBdev2", 00:16:09.607 "aliases": [ 00:16:09.607 "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6" 00:16:09.607 ], 00:16:09.607 "product_name": "Malloc disk", 00:16:09.607 "block_size": 512, 00:16:09.607 "num_blocks": 65536, 00:16:09.607 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:09.607 "assigned_rate_limits": { 00:16:09.607 "rw_ios_per_sec": 0, 00:16:09.607 "rw_mbytes_per_sec": 0, 00:16:09.607 "r_mbytes_per_sec": 0, 00:16:09.607 "w_mbytes_per_sec": 0 00:16:09.607 }, 00:16:09.607 "claimed": false, 00:16:09.607 "zoned": false, 00:16:09.607 "supported_io_types": { 00:16:09.607 "read": true, 00:16:09.607 "write": true, 00:16:09.607 "unmap": true, 00:16:09.607 "flush": true, 00:16:09.607 "reset": true, 00:16:09.607 "nvme_admin": false, 00:16:09.607 "nvme_io": false, 00:16:09.607 "nvme_io_md": false, 00:16:09.607 "write_zeroes": true, 00:16:09.607 "zcopy": true, 00:16:09.607 "get_zone_info": false, 00:16:09.607 "zone_management": false, 00:16:09.607 "zone_append": false, 00:16:09.607 "compare": false, 00:16:09.607 "compare_and_write": false, 00:16:09.607 "abort": true, 00:16:09.607 "seek_hole": false, 00:16:09.607 "seek_data": false, 00:16:09.607 "copy": true, 00:16:09.607 "nvme_iov_md": false 00:16:09.607 }, 00:16:09.607 "memory_domains": [ 00:16:09.607 { 00:16:09.607 "dma_device_id": "system", 00:16:09.607 "dma_device_type": 1 00:16:09.607 }, 00:16:09.607 { 00:16:09.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.607 "dma_device_type": 2 00:16:09.607 } 00:16:09.607 ], 00:16:09.607 "driver_specific": {} 00:16:09.607 } 00:16:09.607 ] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.607 14:15:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 BaseBdev3 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 [ 00:16:09.607 { 00:16:09.607 "name": "BaseBdev3", 00:16:09.607 "aliases": [ 00:16:09.607 "521d2147-d3d4-4734-949b-1103749b701b" 00:16:09.607 ], 00:16:09.607 "product_name": "Malloc disk", 00:16:09.607 "block_size": 512, 00:16:09.607 "num_blocks": 65536, 00:16:09.607 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:09.607 "assigned_rate_limits": { 00:16:09.607 "rw_ios_per_sec": 0, 00:16:09.607 "rw_mbytes_per_sec": 0, 00:16:09.607 "r_mbytes_per_sec": 0, 00:16:09.607 "w_mbytes_per_sec": 0 00:16:09.607 }, 00:16:09.607 "claimed": false, 00:16:09.607 "zoned": false, 00:16:09.607 "supported_io_types": { 00:16:09.607 "read": true, 00:16:09.607 "write": true, 00:16:09.607 "unmap": true, 00:16:09.607 "flush": true, 00:16:09.607 "reset": true, 00:16:09.607 "nvme_admin": false, 00:16:09.607 "nvme_io": false, 00:16:09.607 "nvme_io_md": false, 00:16:09.607 "write_zeroes": true, 00:16:09.607 "zcopy": true, 00:16:09.607 "get_zone_info": false, 00:16:09.607 "zone_management": false, 00:16:09.607 "zone_append": false, 00:16:09.607 "compare": false, 00:16:09.607 "compare_and_write": false, 00:16:09.607 "abort": true, 00:16:09.607 "seek_hole": false, 00:16:09.607 "seek_data": false, 00:16:09.607 "copy": true, 00:16:09.607 "nvme_iov_md": false 00:16:09.607 }, 00:16:09.607 "memory_domains": [ 00:16:09.607 { 00:16:09.607 "dma_device_id": "system", 00:16:09.607 "dma_device_type": 1 00:16:09.607 }, 00:16:09.607 { 00:16:09.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.607 "dma_device_type": 2 00:16:09.607 } 00:16:09.607 ], 00:16:09.607 "driver_specific": {} 00:16:09.607 } 00:16:09.607 ] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.607 BaseBdev4 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.607 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.608 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.608 [ 00:16:09.608 { 00:16:09.608 "name": "BaseBdev4", 00:16:09.608 "aliases": [ 00:16:09.608 "567fd5d6-258c-4376-b35c-a6accd473bbb" 00:16:09.608 ], 00:16:09.608 "product_name": "Malloc disk", 00:16:09.608 "block_size": 512, 00:16:09.608 "num_blocks": 65536, 00:16:09.608 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:09.608 "assigned_rate_limits": { 00:16:09.608 "rw_ios_per_sec": 0, 00:16:09.608 "rw_mbytes_per_sec": 0, 00:16:09.608 "r_mbytes_per_sec": 0, 00:16:09.608 "w_mbytes_per_sec": 0 00:16:09.608 }, 00:16:09.608 "claimed": false, 00:16:09.608 "zoned": false, 00:16:09.608 "supported_io_types": { 00:16:09.608 "read": true, 00:16:09.608 "write": true, 00:16:09.608 "unmap": true, 00:16:09.608 "flush": true, 00:16:09.608 "reset": true, 00:16:09.608 "nvme_admin": false, 00:16:09.608 "nvme_io": false, 00:16:09.608 "nvme_io_md": false, 00:16:09.608 "write_zeroes": true, 00:16:09.608 "zcopy": true, 00:16:09.608 "get_zone_info": false, 00:16:09.608 "zone_management": false, 00:16:09.608 "zone_append": false, 00:16:09.608 "compare": false, 00:16:09.608 "compare_and_write": false, 00:16:09.608 "abort": true, 00:16:09.608 "seek_hole": false, 00:16:09.608 "seek_data": false, 00:16:09.608 "copy": true, 00:16:09.608 "nvme_iov_md": false 00:16:09.608 }, 00:16:09.608 "memory_domains": [ 00:16:09.608 { 00:16:09.608 "dma_device_id": "system", 00:16:09.869 "dma_device_type": 1 00:16:09.869 }, 00:16:09.869 { 00:16:09.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.869 "dma_device_type": 2 00:16:09.869 } 00:16:09.869 ], 00:16:09.869 "driver_specific": {} 00:16:09.869 } 00:16:09.869 ] 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.869 [2024-11-27 14:15:40.568673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.869 [2024-11-27 14:15:40.568782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.869 [2024-11-27 14:15:40.568857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.869 [2024-11-27 14:15:40.570844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.869 [2024-11-27 14:15:40.570950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.869 "name": "Existed_Raid", 00:16:09.869 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:09.869 "strip_size_kb": 0, 00:16:09.869 "state": "configuring", 00:16:09.869 "raid_level": "raid1", 00:16:09.869 "superblock": true, 00:16:09.869 "num_base_bdevs": 4, 00:16:09.869 "num_base_bdevs_discovered": 3, 00:16:09.869 "num_base_bdevs_operational": 4, 00:16:09.869 "base_bdevs_list": [ 00:16:09.869 { 00:16:09.869 "name": "BaseBdev1", 00:16:09.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.869 "is_configured": false, 00:16:09.869 "data_offset": 0, 00:16:09.869 "data_size": 0 00:16:09.869 }, 00:16:09.869 { 00:16:09.869 "name": "BaseBdev2", 00:16:09.869 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 
00:16:09.869 "is_configured": true, 00:16:09.869 "data_offset": 2048, 00:16:09.869 "data_size": 63488 00:16:09.869 }, 00:16:09.869 { 00:16:09.869 "name": "BaseBdev3", 00:16:09.869 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:09.869 "is_configured": true, 00:16:09.869 "data_offset": 2048, 00:16:09.869 "data_size": 63488 00:16:09.869 }, 00:16:09.869 { 00:16:09.869 "name": "BaseBdev4", 00:16:09.869 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:09.869 "is_configured": true, 00:16:09.869 "data_offset": 2048, 00:16:09.869 "data_size": 63488 00:16:09.869 } 00:16:09.869 ] 00:16:09.869 }' 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.869 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.129 [2024-11-27 14:15:41.035873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.129 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.389 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.389 "name": "Existed_Raid", 00:16:10.389 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:10.389 "strip_size_kb": 0, 00:16:10.389 "state": "configuring", 00:16:10.389 "raid_level": "raid1", 00:16:10.389 "superblock": true, 00:16:10.389 "num_base_bdevs": 4, 00:16:10.389 "num_base_bdevs_discovered": 2, 00:16:10.389 "num_base_bdevs_operational": 4, 00:16:10.389 "base_bdevs_list": [ 00:16:10.389 { 00:16:10.389 "name": "BaseBdev1", 00:16:10.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.389 "is_configured": false, 00:16:10.389 "data_offset": 0, 00:16:10.389 "data_size": 0 00:16:10.389 }, 00:16:10.389 { 00:16:10.389 "name": null, 00:16:10.390 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:10.390 
"is_configured": false, 00:16:10.390 "data_offset": 0, 00:16:10.390 "data_size": 63488 00:16:10.390 }, 00:16:10.390 { 00:16:10.390 "name": "BaseBdev3", 00:16:10.390 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:10.390 "is_configured": true, 00:16:10.390 "data_offset": 2048, 00:16:10.390 "data_size": 63488 00:16:10.390 }, 00:16:10.390 { 00:16:10.390 "name": "BaseBdev4", 00:16:10.390 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:10.390 "is_configured": true, 00:16:10.390 "data_offset": 2048, 00:16:10.390 "data_size": 63488 00:16:10.390 } 00:16:10.390 ] 00:16:10.390 }' 00:16:10.390 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.390 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.649 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.649 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:10.649 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.650 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.910 [2024-11-27 14:15:41.617557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.910 BaseBdev1 
00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.910 [ 00:16:10.910 { 00:16:10.910 "name": "BaseBdev1", 00:16:10.910 "aliases": [ 00:16:10.910 "52e41127-f231-413e-82b1-9e86c03ba71c" 00:16:10.910 ], 00:16:10.910 "product_name": "Malloc disk", 00:16:10.910 "block_size": 512, 00:16:10.910 "num_blocks": 65536, 00:16:10.910 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:10.910 "assigned_rate_limits": { 00:16:10.910 
"rw_ios_per_sec": 0, 00:16:10.910 "rw_mbytes_per_sec": 0, 00:16:10.910 "r_mbytes_per_sec": 0, 00:16:10.910 "w_mbytes_per_sec": 0 00:16:10.910 }, 00:16:10.910 "claimed": true, 00:16:10.910 "claim_type": "exclusive_write", 00:16:10.910 "zoned": false, 00:16:10.910 "supported_io_types": { 00:16:10.910 "read": true, 00:16:10.910 "write": true, 00:16:10.910 "unmap": true, 00:16:10.910 "flush": true, 00:16:10.910 "reset": true, 00:16:10.910 "nvme_admin": false, 00:16:10.910 "nvme_io": false, 00:16:10.910 "nvme_io_md": false, 00:16:10.910 "write_zeroes": true, 00:16:10.910 "zcopy": true, 00:16:10.910 "get_zone_info": false, 00:16:10.910 "zone_management": false, 00:16:10.910 "zone_append": false, 00:16:10.910 "compare": false, 00:16:10.910 "compare_and_write": false, 00:16:10.910 "abort": true, 00:16:10.910 "seek_hole": false, 00:16:10.910 "seek_data": false, 00:16:10.910 "copy": true, 00:16:10.910 "nvme_iov_md": false 00:16:10.910 }, 00:16:10.910 "memory_domains": [ 00:16:10.910 { 00:16:10.910 "dma_device_id": "system", 00:16:10.910 "dma_device_type": 1 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.910 "dma_device_type": 2 00:16:10.910 } 00:16:10.910 ], 00:16:10.910 "driver_specific": {} 00:16:10.910 } 00:16:10.910 ] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.910 "name": "Existed_Raid", 00:16:10.910 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:10.910 "strip_size_kb": 0, 00:16:10.910 "state": "configuring", 00:16:10.910 "raid_level": "raid1", 00:16:10.910 "superblock": true, 00:16:10.910 "num_base_bdevs": 4, 00:16:10.910 "num_base_bdevs_discovered": 3, 00:16:10.910 "num_base_bdevs_operational": 4, 00:16:10.910 "base_bdevs_list": [ 00:16:10.910 { 00:16:10.910 "name": "BaseBdev1", 00:16:10.910 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:10.910 "is_configured": true, 00:16:10.910 "data_offset": 2048, 00:16:10.910 "data_size": 63488 
00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": null, 00:16:10.910 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:10.910 "is_configured": false, 00:16:10.910 "data_offset": 0, 00:16:10.910 "data_size": 63488 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": "BaseBdev3", 00:16:10.910 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:10.910 "is_configured": true, 00:16:10.910 "data_offset": 2048, 00:16:10.910 "data_size": 63488 00:16:10.910 }, 00:16:10.910 { 00:16:10.910 "name": "BaseBdev4", 00:16:10.910 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:10.910 "is_configured": true, 00:16:10.910 "data_offset": 2048, 00:16:10.910 "data_size": 63488 00:16:10.910 } 00:16:10.910 ] 00:16:10.910 }' 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.910 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.171 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.171 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.171 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.171 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.431 
[2024-11-27 14:15:42.172722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.431 14:15:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.431 "name": "Existed_Raid", 00:16:11.431 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:11.431 "strip_size_kb": 0, 00:16:11.431 "state": "configuring", 00:16:11.431 "raid_level": "raid1", 00:16:11.431 "superblock": true, 00:16:11.431 "num_base_bdevs": 4, 00:16:11.431 "num_base_bdevs_discovered": 2, 00:16:11.431 "num_base_bdevs_operational": 4, 00:16:11.431 "base_bdevs_list": [ 00:16:11.431 { 00:16:11.431 "name": "BaseBdev1", 00:16:11.431 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:11.431 "is_configured": true, 00:16:11.431 "data_offset": 2048, 00:16:11.431 "data_size": 63488 00:16:11.431 }, 00:16:11.431 { 00:16:11.431 "name": null, 00:16:11.431 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:11.431 "is_configured": false, 00:16:11.431 "data_offset": 0, 00:16:11.431 "data_size": 63488 00:16:11.431 }, 00:16:11.431 { 00:16:11.431 "name": null, 00:16:11.431 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:11.431 "is_configured": false, 00:16:11.431 "data_offset": 0, 00:16:11.431 "data_size": 63488 00:16:11.431 }, 00:16:11.431 { 00:16:11.431 "name": "BaseBdev4", 00:16:11.431 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:11.431 "is_configured": true, 00:16:11.431 "data_offset": 2048, 00:16:11.431 "data_size": 63488 00:16:11.431 } 00:16:11.431 ] 00:16:11.431 }' 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.431 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.691 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.691 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.691 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.691 
14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.691 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.950 [2024-11-27 14:15:42.667871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.950 "name": "Existed_Raid", 00:16:11.950 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:11.950 "strip_size_kb": 0, 00:16:11.950 "state": "configuring", 00:16:11.950 "raid_level": "raid1", 00:16:11.950 "superblock": true, 00:16:11.950 "num_base_bdevs": 4, 00:16:11.950 "num_base_bdevs_discovered": 3, 00:16:11.950 "num_base_bdevs_operational": 4, 00:16:11.950 "base_bdevs_list": [ 00:16:11.950 { 00:16:11.950 "name": "BaseBdev1", 00:16:11.950 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:11.950 "is_configured": true, 00:16:11.950 "data_offset": 2048, 00:16:11.950 "data_size": 63488 00:16:11.950 }, 00:16:11.950 { 00:16:11.950 "name": null, 00:16:11.950 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:11.950 "is_configured": false, 00:16:11.950 "data_offset": 0, 00:16:11.950 "data_size": 63488 00:16:11.950 }, 00:16:11.950 { 00:16:11.950 "name": "BaseBdev3", 00:16:11.950 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:11.950 "is_configured": true, 00:16:11.950 "data_offset": 2048, 00:16:11.950 "data_size": 63488 00:16:11.950 }, 00:16:11.950 { 00:16:11.950 "name": "BaseBdev4", 00:16:11.950 "uuid": 
"567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:11.950 "is_configured": true, 00:16:11.950 "data_offset": 2048, 00:16:11.950 "data_size": 63488 00:16:11.950 } 00:16:11.950 ] 00:16:11.950 }' 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.950 14:15:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.209 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.209 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.209 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.209 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:12.209 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.469 [2024-11-27 14:15:43.202989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.469 "name": "Existed_Raid", 00:16:12.469 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:12.469 "strip_size_kb": 0, 00:16:12.469 "state": "configuring", 00:16:12.469 "raid_level": "raid1", 00:16:12.469 "superblock": true, 00:16:12.469 "num_base_bdevs": 4, 00:16:12.469 "num_base_bdevs_discovered": 2, 00:16:12.469 "num_base_bdevs_operational": 4, 00:16:12.469 "base_bdevs_list": [ 00:16:12.469 { 00:16:12.469 "name": null, 00:16:12.469 
"uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:12.469 "is_configured": false, 00:16:12.469 "data_offset": 0, 00:16:12.469 "data_size": 63488 00:16:12.469 }, 00:16:12.469 { 00:16:12.469 "name": null, 00:16:12.469 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:12.469 "is_configured": false, 00:16:12.469 "data_offset": 0, 00:16:12.469 "data_size": 63488 00:16:12.469 }, 00:16:12.469 { 00:16:12.469 "name": "BaseBdev3", 00:16:12.469 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:12.469 "is_configured": true, 00:16:12.469 "data_offset": 2048, 00:16:12.469 "data_size": 63488 00:16:12.469 }, 00:16:12.469 { 00:16:12.469 "name": "BaseBdev4", 00:16:12.469 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:12.469 "is_configured": true, 00:16:12.469 "data_offset": 2048, 00:16:12.469 "data_size": 63488 00:16:12.469 } 00:16:12.469 ] 00:16:12.469 }' 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.469 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.037 [2024-11-27 14:15:43.745527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.037 "name": "Existed_Raid", 00:16:13.037 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:13.037 "strip_size_kb": 0, 00:16:13.037 "state": "configuring", 00:16:13.037 "raid_level": "raid1", 00:16:13.037 "superblock": true, 00:16:13.037 "num_base_bdevs": 4, 00:16:13.037 "num_base_bdevs_discovered": 3, 00:16:13.037 "num_base_bdevs_operational": 4, 00:16:13.037 "base_bdevs_list": [ 00:16:13.037 { 00:16:13.037 "name": null, 00:16:13.037 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:13.037 "is_configured": false, 00:16:13.037 "data_offset": 0, 00:16:13.037 "data_size": 63488 00:16:13.037 }, 00:16:13.037 { 00:16:13.037 "name": "BaseBdev2", 00:16:13.037 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:13.037 "is_configured": true, 00:16:13.037 "data_offset": 2048, 00:16:13.037 "data_size": 63488 00:16:13.037 }, 00:16:13.037 { 00:16:13.037 "name": "BaseBdev3", 00:16:13.037 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:13.037 "is_configured": true, 00:16:13.037 "data_offset": 2048, 00:16:13.037 "data_size": 63488 00:16:13.037 }, 00:16:13.037 { 00:16:13.037 "name": "BaseBdev4", 00:16:13.037 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:13.037 "is_configured": true, 00:16:13.037 "data_offset": 2048, 00:16:13.037 "data_size": 63488 00:16:13.037 } 00:16:13.037 ] 00:16:13.037 }' 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.037 14:15:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:13.297 14:15:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.297 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:13.555 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 52e41127-f231-413e-82b1-9e86c03ba71c 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.556 [2024-11-27 14:15:44.330838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:13.556 [2024-11-27 14:15:44.331182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:13.556 [2024-11-27 14:15:44.331240] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:13.556 [2024-11-27 14:15:44.331526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 [2024-11-27 14:15:44.331738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:13.556 [2024-11-27 14:15:44.331782] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:13.556 NewBaseBdev 00:16:13.556 [2024-11-27 14:15:44.331977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.556 14:15:44
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.556 [ 00:16:13.556 { 00:16:13.556 "name": "NewBaseBdev", 00:16:13.556 "aliases": [ 00:16:13.556 "52e41127-f231-413e-82b1-9e86c03ba71c" 00:16:13.556 ], 00:16:13.556 "product_name": "Malloc disk", 00:16:13.556 "block_size": 512, 00:16:13.556 "num_blocks": 65536, 00:16:13.556 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:13.556 "assigned_rate_limits": { 00:16:13.556 "rw_ios_per_sec": 0, 00:16:13.556 "rw_mbytes_per_sec": 0, 00:16:13.556 "r_mbytes_per_sec": 0, 00:16:13.556 "w_mbytes_per_sec": 0 00:16:13.556 }, 00:16:13.556 "claimed": true, 00:16:13.556 "claim_type": "exclusive_write", 00:16:13.556 "zoned": false, 00:16:13.556 "supported_io_types": { 00:16:13.556 "read": true, 00:16:13.556 "write": true, 00:16:13.556 "unmap": true, 00:16:13.556 "flush": true, 00:16:13.556 "reset": true, 00:16:13.556 "nvme_admin": false, 00:16:13.556 "nvme_io": false, 00:16:13.556 "nvme_io_md": false, 00:16:13.556 "write_zeroes": true, 00:16:13.556 "zcopy": true, 00:16:13.556 "get_zone_info": false, 00:16:13.556 "zone_management": false, 00:16:13.556 "zone_append": false, 00:16:13.556 "compare": false, 00:16:13.556 "compare_and_write": false, 00:16:13.556 "abort": true, 00:16:13.556 "seek_hole": false, 00:16:13.556 "seek_data": false, 00:16:13.556 "copy": true, 00:16:13.556 "nvme_iov_md": false 00:16:13.556 }, 00:16:13.556 "memory_domains": [ 00:16:13.556 { 00:16:13.556 "dma_device_id": "system", 00:16:13.556 "dma_device_type": 1 00:16:13.556 }, 00:16:13.556 { 00:16:13.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.556 "dma_device_type": 2 00:16:13.556 } 00:16:13.556 ], 00:16:13.556 "driver_specific": {} 00:16:13.556 } 00:16:13.556 ] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.556 14:15:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.556 "name": "Existed_Raid", 00:16:13.556 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:13.556 "strip_size_kb": 0, 00:16:13.556 
"state": "online", 00:16:13.556 "raid_level": "raid1", 00:16:13.556 "superblock": true, 00:16:13.556 "num_base_bdevs": 4, 00:16:13.556 "num_base_bdevs_discovered": 4, 00:16:13.556 "num_base_bdevs_operational": 4, 00:16:13.556 "base_bdevs_list": [ 00:16:13.556 { 00:16:13.556 "name": "NewBaseBdev", 00:16:13.556 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:13.556 "is_configured": true, 00:16:13.556 "data_offset": 2048, 00:16:13.556 "data_size": 63488 00:16:13.556 }, 00:16:13.556 { 00:16:13.556 "name": "BaseBdev2", 00:16:13.556 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:13.556 "is_configured": true, 00:16:13.556 "data_offset": 2048, 00:16:13.556 "data_size": 63488 00:16:13.556 }, 00:16:13.556 { 00:16:13.556 "name": "BaseBdev3", 00:16:13.556 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:13.556 "is_configured": true, 00:16:13.556 "data_offset": 2048, 00:16:13.556 "data_size": 63488 00:16:13.556 }, 00:16:13.556 { 00:16:13.556 "name": "BaseBdev4", 00:16:13.556 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:13.556 "is_configured": true, 00:16:13.556 "data_offset": 2048, 00:16:13.556 "data_size": 63488 00:16:13.556 } 00:16:13.556 ] 00:16:13.556 }' 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.556 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:13.816 
14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.816 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 [2024-11-27 14:15:44.774509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:14.076 "name": "Existed_Raid", 00:16:14.076 "aliases": [ 00:16:14.076 "735f5880-86ee-4ce6-bedb-efcb8291db43" 00:16:14.076 ], 00:16:14.076 "product_name": "Raid Volume", 00:16:14.076 "block_size": 512, 00:16:14.076 "num_blocks": 63488, 00:16:14.076 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:14.076 "assigned_rate_limits": { 00:16:14.076 "rw_ios_per_sec": 0, 00:16:14.076 "rw_mbytes_per_sec": 0, 00:16:14.076 "r_mbytes_per_sec": 0, 00:16:14.076 "w_mbytes_per_sec": 0 00:16:14.076 }, 00:16:14.076 "claimed": false, 00:16:14.076 "zoned": false, 00:16:14.076 "supported_io_types": { 00:16:14.076 "read": true, 00:16:14.076 "write": true, 00:16:14.076 "unmap": false, 00:16:14.076 "flush": false, 00:16:14.076 "reset": true, 00:16:14.076 "nvme_admin": false, 00:16:14.076 "nvme_io": false, 00:16:14.076 "nvme_io_md": false, 00:16:14.076 "write_zeroes": true, 00:16:14.076 "zcopy": false, 00:16:14.076 "get_zone_info": false, 00:16:14.076 "zone_management": false, 00:16:14.076 "zone_append": false, 00:16:14.076 "compare": false, 00:16:14.076 "compare_and_write": false, 00:16:14.076 
"abort": false, 00:16:14.076 "seek_hole": false, 00:16:14.076 "seek_data": false, 00:16:14.076 "copy": false, 00:16:14.076 "nvme_iov_md": false 00:16:14.076 }, 00:16:14.076 "memory_domains": [ 00:16:14.076 { 00:16:14.076 "dma_device_id": "system", 00:16:14.076 "dma_device_type": 1 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.076 "dma_device_type": 2 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "system", 00:16:14.076 "dma_device_type": 1 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.076 "dma_device_type": 2 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "system", 00:16:14.076 "dma_device_type": 1 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.076 "dma_device_type": 2 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "system", 00:16:14.076 "dma_device_type": 1 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.076 "dma_device_type": 2 00:16:14.076 } 00:16:14.076 ], 00:16:14.076 "driver_specific": { 00:16:14.076 "raid": { 00:16:14.076 "uuid": "735f5880-86ee-4ce6-bedb-efcb8291db43", 00:16:14.076 "strip_size_kb": 0, 00:16:14.076 "state": "online", 00:16:14.076 "raid_level": "raid1", 00:16:14.076 "superblock": true, 00:16:14.076 "num_base_bdevs": 4, 00:16:14.076 "num_base_bdevs_discovered": 4, 00:16:14.076 "num_base_bdevs_operational": 4, 00:16:14.076 "base_bdevs_list": [ 00:16:14.076 { 00:16:14.076 "name": "NewBaseBdev", 00:16:14.076 "uuid": "52e41127-f231-413e-82b1-9e86c03ba71c", 00:16:14.076 "is_configured": true, 00:16:14.076 "data_offset": 2048, 00:16:14.076 "data_size": 63488 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "name": "BaseBdev2", 00:16:14.076 "uuid": "f1332a1d-f6c6-4c8b-b2ea-9857c80b98b6", 00:16:14.076 "is_configured": true, 00:16:14.076 "data_offset": 2048, 00:16:14.076 "data_size": 63488 00:16:14.076 }, 00:16:14.076 { 
00:16:14.076 "name": "BaseBdev3", 00:16:14.076 "uuid": "521d2147-d3d4-4734-949b-1103749b701b", 00:16:14.076 "is_configured": true, 00:16:14.076 "data_offset": 2048, 00:16:14.076 "data_size": 63488 00:16:14.076 }, 00:16:14.076 { 00:16:14.076 "name": "BaseBdev4", 00:16:14.076 "uuid": "567fd5d6-258c-4376-b35c-a6accd473bbb", 00:16:14.076 "is_configured": true, 00:16:14.076 "data_offset": 2048, 00:16:14.076 "data_size": 63488 00:16:14.076 } 00:16:14.076 ] 00:16:14.076 } 00:16:14.076 } 00:16:14.076 }' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:14.076 BaseBdev2 00:16:14.076 BaseBdev3 00:16:14.076 BaseBdev4' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.076 14:15:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.076 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.336 [2024-11-27 14:15:45.085582] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.336 [2024-11-27 14:15:45.085650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.336 [2024-11-27 14:15:45.085741] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.336 [2024-11-27 14:15:45.086052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.336 [2024-11-27 14:15:45.086111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74101 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74101 ']' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74101 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74101 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.336 killing process with pid 74101 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74101' 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74101 00:16:14.336 [2024-11-27 14:15:45.117355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.336 14:15:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74101 00:16:14.596 [2024-11-27 14:15:45.522023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.977 14:15:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:15.977 00:16:15.977 real 0m11.910s 00:16:15.977 user 0m19.019s 00:16:15.977 sys 0m2.078s 00:16:15.977 14:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:15.977 14:15:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.977 ************************************ 00:16:15.977 END TEST raid_state_function_test_sb 00:16:15.977 ************************************ 00:16:15.977 14:15:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:15.977 14:15:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:15.977 14:15:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.977 14:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.977 ************************************ 00:16:15.977 START TEST raid_superblock_test 00:16:15.977 ************************************ 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:15.977 14:15:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74778 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74778 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74778 ']' 00:16:15.977 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.978 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.978 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.978 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.978 14:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.978 [2024-11-27 14:15:46.832832] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:15.978 [2024-11-27 14:15:46.832955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74778 ] 00:16:16.238 [2024-11-27 14:15:47.010688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.238 [2024-11-27 14:15:47.127929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.497 [2024-11-27 14:15:47.340268] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.497 [2024-11-27 14:15:47.340332] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:17.067 
14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 malloc1 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 [2024-11-27 14:15:47.789701] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:17.067 [2024-11-27 14:15:47.789821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.067 [2024-11-27 14:15:47.789861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:17.067 [2024-11-27 14:15:47.789889] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.067 [2024-11-27 14:15:47.791972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.067 [2024-11-27 14:15:47.792070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:17.067 pt1 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 malloc2 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 [2024-11-27 14:15:47.849858] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.067 [2024-11-27 14:15:47.849919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.067 [2024-11-27 14:15:47.849957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:17.067 [2024-11-27 14:15:47.849967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.067 [2024-11-27 14:15:47.852174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.067 [2024-11-27 14:15:47.852210] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.067 
pt2 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 malloc3 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.067 [2024-11-27 14:15:47.917535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:17.067 [2024-11-27 14:15:47.917633] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.067 [2024-11-27 14:15:47.917675] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:17.067 [2024-11-27 14:15:47.917724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.067 [2024-11-27 14:15:47.919850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.067 [2024-11-27 14:15:47.919923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:17.067 pt3 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.067 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.068 malloc4 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.068 [2024-11-27 14:15:47.978446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:17.068 [2024-11-27 14:15:47.978570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.068 [2024-11-27 14:15:47.978613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:17.068 [2024-11-27 14:15:47.978650] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.068 [2024-11-27 14:15:47.980816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.068 [2024-11-27 14:15:47.980891] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:17.068 pt4 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.068 [2024-11-27 14:15:47.990455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.068 [2024-11-27 14:15:47.992340] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.068 [2024-11-27 14:15:47.992413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:17.068 [2024-11-27 14:15:47.992487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:17.068 [2024-11-27 14:15:47.992717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:17.068 [2024-11-27 14:15:47.992746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:17.068 [2024-11-27 14:15:47.993034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:17.068 [2024-11-27 14:15:47.993268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:17.068 [2024-11-27 14:15:47.993284] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:17.068 [2024-11-27 14:15:47.993469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.068 
14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.068 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.068 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.068 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.068 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.068 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.326 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.326 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.326 "name": "raid_bdev1", 00:16:17.326 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:17.326 "strip_size_kb": 0, 00:16:17.326 "state": "online", 00:16:17.326 "raid_level": "raid1", 00:16:17.326 "superblock": true, 00:16:17.326 "num_base_bdevs": 4, 00:16:17.326 "num_base_bdevs_discovered": 4, 00:16:17.326 "num_base_bdevs_operational": 4, 00:16:17.326 "base_bdevs_list": [ 00:16:17.326 { 00:16:17.326 "name": "pt1", 00:16:17.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.326 "is_configured": true, 00:16:17.327 "data_offset": 2048, 00:16:17.327 "data_size": 63488 00:16:17.327 }, 00:16:17.327 { 00:16:17.327 "name": "pt2", 00:16:17.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.327 "is_configured": true, 00:16:17.327 "data_offset": 2048, 00:16:17.327 "data_size": 63488 00:16:17.327 }, 00:16:17.327 { 00:16:17.327 "name": "pt3", 00:16:17.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.327 "is_configured": true, 00:16:17.327 "data_offset": 2048, 00:16:17.327 "data_size": 63488 
00:16:17.327 }, 00:16:17.327 { 00:16:17.327 "name": "pt4", 00:16:17.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.327 "is_configured": true, 00:16:17.327 "data_offset": 2048, 00:16:17.327 "data_size": 63488 00:16:17.327 } 00:16:17.327 ] 00:16:17.327 }' 00:16:17.327 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.327 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.585 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.585 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:17.585 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.586 [2024-11-27 14:15:48.445991] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.586 "name": "raid_bdev1", 00:16:17.586 "aliases": [ 00:16:17.586 "71cdd9fe-056c-4336-a071-0db662564f9e" 00:16:17.586 ], 
00:16:17.586 "product_name": "Raid Volume", 00:16:17.586 "block_size": 512, 00:16:17.586 "num_blocks": 63488, 00:16:17.586 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:17.586 "assigned_rate_limits": { 00:16:17.586 "rw_ios_per_sec": 0, 00:16:17.586 "rw_mbytes_per_sec": 0, 00:16:17.586 "r_mbytes_per_sec": 0, 00:16:17.586 "w_mbytes_per_sec": 0 00:16:17.586 }, 00:16:17.586 "claimed": false, 00:16:17.586 "zoned": false, 00:16:17.586 "supported_io_types": { 00:16:17.586 "read": true, 00:16:17.586 "write": true, 00:16:17.586 "unmap": false, 00:16:17.586 "flush": false, 00:16:17.586 "reset": true, 00:16:17.586 "nvme_admin": false, 00:16:17.586 "nvme_io": false, 00:16:17.586 "nvme_io_md": false, 00:16:17.586 "write_zeroes": true, 00:16:17.586 "zcopy": false, 00:16:17.586 "get_zone_info": false, 00:16:17.586 "zone_management": false, 00:16:17.586 "zone_append": false, 00:16:17.586 "compare": false, 00:16:17.586 "compare_and_write": false, 00:16:17.586 "abort": false, 00:16:17.586 "seek_hole": false, 00:16:17.586 "seek_data": false, 00:16:17.586 "copy": false, 00:16:17.586 "nvme_iov_md": false 00:16:17.586 }, 00:16:17.586 "memory_domains": [ 00:16:17.586 { 00:16:17.586 "dma_device_id": "system", 00:16:17.586 "dma_device_type": 1 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.586 "dma_device_type": 2 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "system", 00:16:17.586 "dma_device_type": 1 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.586 "dma_device_type": 2 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "system", 00:16:17.586 "dma_device_type": 1 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.586 "dma_device_type": 2 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": "system", 00:16:17.586 "dma_device_type": 1 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:17.586 "dma_device_type": 2 00:16:17.586 } 00:16:17.586 ], 00:16:17.586 "driver_specific": { 00:16:17.586 "raid": { 00:16:17.586 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:17.586 "strip_size_kb": 0, 00:16:17.586 "state": "online", 00:16:17.586 "raid_level": "raid1", 00:16:17.586 "superblock": true, 00:16:17.586 "num_base_bdevs": 4, 00:16:17.586 "num_base_bdevs_discovered": 4, 00:16:17.586 "num_base_bdevs_operational": 4, 00:16:17.586 "base_bdevs_list": [ 00:16:17.586 { 00:16:17.586 "name": "pt1", 00:16:17.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.586 "is_configured": true, 00:16:17.586 "data_offset": 2048, 00:16:17.586 "data_size": 63488 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "name": "pt2", 00:16:17.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.586 "is_configured": true, 00:16:17.586 "data_offset": 2048, 00:16:17.586 "data_size": 63488 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "name": "pt3", 00:16:17.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.586 "is_configured": true, 00:16:17.586 "data_offset": 2048, 00:16:17.586 "data_size": 63488 00:16:17.586 }, 00:16:17.586 { 00:16:17.586 "name": "pt4", 00:16:17.586 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.586 "is_configured": true, 00:16:17.586 "data_offset": 2048, 00:16:17.586 "data_size": 63488 00:16:17.586 } 00:16:17.586 ] 00:16:17.586 } 00:16:17.586 } 00:16:17.586 }' 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:17.586 pt2 00:16:17.586 pt3 00:16:17.586 pt4' 00:16:17.586 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.847 14:15:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:17.847 [2024-11-27 14:15:48.753458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71cdd9fe-056c-4336-a071-0db662564f9e 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 71cdd9fe-056c-4336-a071-0db662564f9e ']' 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.847 [2024-11-27 14:15:48.793050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:17.847 [2024-11-27 14:15:48.793078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.847 [2024-11-27 14:15:48.793219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.847 [2024-11-27 14:15:48.793340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.847 [2024-11-27 14:15:48.793383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:17.847 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.108 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.108 [2024-11-27 14:15:48.952779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:18.108 [2024-11-27 14:15:48.954644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:18.108 [2024-11-27 14:15:48.954693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:18.108 [2024-11-27 14:15:48.954728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:18.108 [2024-11-27 14:15:48.954776] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:18.109 [2024-11-27 14:15:48.954825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:18.109 [2024-11-27 14:15:48.954844] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:18.109 [2024-11-27 14:15:48.954862] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:18.109 [2024-11-27 14:15:48.954875] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.109 [2024-11-27 14:15:48.954886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:16:18.109 request: 00:16:18.109 { 00:16:18.109 "name": "raid_bdev1", 00:16:18.109 "raid_level": "raid1", 00:16:18.109 "base_bdevs": [ 00:16:18.109 "malloc1", 00:16:18.109 "malloc2", 00:16:18.109 "malloc3", 00:16:18.109 "malloc4" 00:16:18.109 ], 00:16:18.109 "superblock": false, 00:16:18.109 "method": "bdev_raid_create", 00:16:18.109 "req_id": 1 00:16:18.109 } 00:16:18.109 Got JSON-RPC error response 00:16:18.109 response: 00:16:18.109 { 00:16:18.109 "code": -17, 00:16:18.109 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:18.109 } 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.109 14:15:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:18.109 14:15:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.109 [2024-11-27 14:15:49.016639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:18.109 [2024-11-27 14:15:49.016731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.109 [2024-11-27 14:15:49.016766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:18.109 [2024-11-27 14:15:49.016796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.109 [2024-11-27 14:15:49.018929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.109 [2024-11-27 14:15:49.019021] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:18.109 [2024-11-27 14:15:49.019120] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:18.109 [2024-11-27 14:15:49.019232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:18.109 pt1 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.109 14:15:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.109 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.368 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.368 "name": "raid_bdev1", 00:16:18.368 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:18.368 "strip_size_kb": 0, 00:16:18.368 "state": "configuring", 00:16:18.368 "raid_level": "raid1", 00:16:18.368 "superblock": true, 00:16:18.368 "num_base_bdevs": 4, 00:16:18.368 "num_base_bdevs_discovered": 1, 00:16:18.368 "num_base_bdevs_operational": 4, 00:16:18.368 "base_bdevs_list": [ 00:16:18.368 { 00:16:18.368 "name": "pt1", 00:16:18.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.368 "is_configured": true, 00:16:18.368 "data_offset": 2048, 00:16:18.368 "data_size": 63488 00:16:18.368 }, 00:16:18.368 { 00:16:18.368 "name": null, 00:16:18.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.368 "is_configured": false, 00:16:18.368 "data_offset": 2048, 00:16:18.368 "data_size": 63488 00:16:18.368 }, 00:16:18.368 { 00:16:18.368 "name": null, 00:16:18.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.368 
"is_configured": false, 00:16:18.368 "data_offset": 2048, 00:16:18.368 "data_size": 63488 00:16:18.368 }, 00:16:18.368 { 00:16:18.368 "name": null, 00:16:18.368 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.368 "is_configured": false, 00:16:18.368 "data_offset": 2048, 00:16:18.368 "data_size": 63488 00:16:18.368 } 00:16:18.368 ] 00:16:18.368 }' 00:16:18.368 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.368 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.629 [2024-11-27 14:15:49.471942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:18.629 [2024-11-27 14:15:49.472113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.629 [2024-11-27 14:15:49.472178] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:18.629 [2024-11-27 14:15:49.472222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.629 [2024-11-27 14:15:49.472748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.629 [2024-11-27 14:15:49.472819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:18.629 [2024-11-27 14:15:49.472927] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:18.629 [2024-11-27 14:15:49.472955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:16:18.629 pt2 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.629 [2024-11-27 14:15:49.483913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.629 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.629 "name": "raid_bdev1", 00:16:18.629 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:18.629 "strip_size_kb": 0, 00:16:18.629 "state": "configuring", 00:16:18.629 "raid_level": "raid1", 00:16:18.629 "superblock": true, 00:16:18.629 "num_base_bdevs": 4, 00:16:18.629 "num_base_bdevs_discovered": 1, 00:16:18.629 "num_base_bdevs_operational": 4, 00:16:18.629 "base_bdevs_list": [ 00:16:18.629 { 00:16:18.629 "name": "pt1", 00:16:18.629 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.629 "is_configured": true, 00:16:18.629 "data_offset": 2048, 00:16:18.629 "data_size": 63488 00:16:18.629 }, 00:16:18.629 { 00:16:18.629 "name": null, 00:16:18.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.629 "is_configured": false, 00:16:18.629 "data_offset": 0, 00:16:18.629 "data_size": 63488 00:16:18.629 }, 00:16:18.629 { 00:16:18.629 "name": null, 00:16:18.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.629 "is_configured": false, 00:16:18.629 "data_offset": 2048, 00:16:18.629 "data_size": 63488 00:16:18.629 }, 00:16:18.629 { 00:16:18.629 "name": null, 00:16:18.629 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.629 "is_configured": false, 00:16:18.629 "data_offset": 2048, 00:16:18.629 "data_size": 63488 00:16:18.629 } 00:16:18.629 ] 00:16:18.630 }' 00:16:18.630 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.630 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.200 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:16:19.200 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.200 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.200 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.200 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.200 [2024-11-27 14:15:49.939176] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.200 [2024-11-27 14:15:49.939243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.200 [2024-11-27 14:15:49.939265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:19.200 [2024-11-27 14:15:49.939274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.200 [2024-11-27 14:15:49.939741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.200 [2024-11-27 14:15:49.939771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.201 [2024-11-27 14:15:49.939856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.201 [2024-11-27 14:15:49.939877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.201 pt2 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.201 14:15:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.201 [2024-11-27 14:15:49.951133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.201 [2024-11-27 14:15:49.951185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.201 [2024-11-27 14:15:49.951204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:19.201 [2024-11-27 14:15:49.951212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.201 [2024-11-27 14:15:49.951618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.201 [2024-11-27 14:15:49.951650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.201 [2024-11-27 14:15:49.951726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:19.201 [2024-11-27 14:15:49.951747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.201 pt3 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.201 [2024-11-27 14:15:49.963075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:19.201 [2024-11-27 
14:15:49.963172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.201 [2024-11-27 14:15:49.963207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:19.201 [2024-11-27 14:15:49.963238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.201 [2024-11-27 14:15:49.963613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.201 [2024-11-27 14:15:49.963668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:19.201 [2024-11-27 14:15:49.963755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:19.201 [2024-11-27 14:15:49.963806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:19.201 [2024-11-27 14:15:49.963978] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:19.201 [2024-11-27 14:15:49.964039] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:19.201 [2024-11-27 14:15:49.964304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:19.201 [2024-11-27 14:15:49.964507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:19.201 [2024-11-27 14:15:49.964553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:19.201 [2024-11-27 14:15:49.964730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.201 pt4 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.201 14:15:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.201 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.201 "name": "raid_bdev1", 00:16:19.201 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:19.201 "strip_size_kb": 0, 00:16:19.201 "state": "online", 00:16:19.201 "raid_level": "raid1", 00:16:19.201 "superblock": true, 00:16:19.201 "num_base_bdevs": 4, 00:16:19.201 
"num_base_bdevs_discovered": 4, 00:16:19.201 "num_base_bdevs_operational": 4, 00:16:19.201 "base_bdevs_list": [ 00:16:19.201 { 00:16:19.201 "name": "pt1", 00:16:19.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.201 "is_configured": true, 00:16:19.201 "data_offset": 2048, 00:16:19.201 "data_size": 63488 00:16:19.201 }, 00:16:19.201 { 00:16:19.201 "name": "pt2", 00:16:19.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.201 "is_configured": true, 00:16:19.201 "data_offset": 2048, 00:16:19.201 "data_size": 63488 00:16:19.201 }, 00:16:19.201 { 00:16:19.201 "name": "pt3", 00:16:19.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.201 "is_configured": true, 00:16:19.201 "data_offset": 2048, 00:16:19.201 "data_size": 63488 00:16:19.201 }, 00:16:19.201 { 00:16:19.201 "name": "pt4", 00:16:19.201 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.201 "is_configured": true, 00:16:19.201 "data_offset": 2048, 00:16:19.201 "data_size": 63488 00:16:19.201 } 00:16:19.201 ] 00:16:19.201 }' 00:16:19.201 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.201 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.461 [2024-11-27 14:15:50.378739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.461 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.721 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:19.721 "name": "raid_bdev1", 00:16:19.721 "aliases": [ 00:16:19.721 "71cdd9fe-056c-4336-a071-0db662564f9e" 00:16:19.721 ], 00:16:19.721 "product_name": "Raid Volume", 00:16:19.721 "block_size": 512, 00:16:19.721 "num_blocks": 63488, 00:16:19.721 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:19.721 "assigned_rate_limits": { 00:16:19.721 "rw_ios_per_sec": 0, 00:16:19.721 "rw_mbytes_per_sec": 0, 00:16:19.721 "r_mbytes_per_sec": 0, 00:16:19.721 "w_mbytes_per_sec": 0 00:16:19.721 }, 00:16:19.721 "claimed": false, 00:16:19.721 "zoned": false, 00:16:19.721 "supported_io_types": { 00:16:19.721 "read": true, 00:16:19.721 "write": true, 00:16:19.722 "unmap": false, 00:16:19.722 "flush": false, 00:16:19.722 "reset": true, 00:16:19.722 "nvme_admin": false, 00:16:19.722 "nvme_io": false, 00:16:19.722 "nvme_io_md": false, 00:16:19.722 "write_zeroes": true, 00:16:19.722 "zcopy": false, 00:16:19.722 "get_zone_info": false, 00:16:19.722 "zone_management": false, 00:16:19.722 "zone_append": false, 00:16:19.722 "compare": false, 00:16:19.722 "compare_and_write": false, 00:16:19.722 "abort": false, 00:16:19.722 "seek_hole": false, 00:16:19.722 "seek_data": false, 00:16:19.722 "copy": false, 00:16:19.722 "nvme_iov_md": false 00:16:19.722 }, 00:16:19.722 "memory_domains": [ 00:16:19.722 { 00:16:19.722 "dma_device_id": "system", 00:16:19.722 
"dma_device_type": 1 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.722 "dma_device_type": 2 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "system", 00:16:19.722 "dma_device_type": 1 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.722 "dma_device_type": 2 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "system", 00:16:19.722 "dma_device_type": 1 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.722 "dma_device_type": 2 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "system", 00:16:19.722 "dma_device_type": 1 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.722 "dma_device_type": 2 00:16:19.722 } 00:16:19.722 ], 00:16:19.722 "driver_specific": { 00:16:19.722 "raid": { 00:16:19.722 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:19.722 "strip_size_kb": 0, 00:16:19.722 "state": "online", 00:16:19.722 "raid_level": "raid1", 00:16:19.722 "superblock": true, 00:16:19.722 "num_base_bdevs": 4, 00:16:19.722 "num_base_bdevs_discovered": 4, 00:16:19.722 "num_base_bdevs_operational": 4, 00:16:19.722 "base_bdevs_list": [ 00:16:19.722 { 00:16:19.722 "name": "pt1", 00:16:19.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:19.722 "is_configured": true, 00:16:19.722 "data_offset": 2048, 00:16:19.722 "data_size": 63488 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "pt2", 00:16:19.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.722 "is_configured": true, 00:16:19.722 "data_offset": 2048, 00:16:19.722 "data_size": 63488 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "pt3", 00:16:19.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.722 "is_configured": true, 00:16:19.722 "data_offset": 2048, 00:16:19.722 "data_size": 63488 00:16:19.722 }, 00:16:19.722 { 00:16:19.722 "name": "pt4", 00:16:19.722 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:16:19.722 "is_configured": true, 00:16:19.722 "data_offset": 2048, 00:16:19.722 "data_size": 63488 00:16:19.722 } 00:16:19.722 ] 00:16:19.722 } 00:16:19.722 } 00:16:19.722 }' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:19.722 pt2 00:16:19.722 pt3 00:16:19.722 pt4' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.722 14:15:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:19.722 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.982 [2024-11-27 14:15:50.714084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71cdd9fe-056c-4336-a071-0db662564f9e '!=' 71cdd9fe-056c-4336-a071-0db662564f9e ']' 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.982 [2024-11-27 14:15:50.757780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:19.982 14:15:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.982 "name": "raid_bdev1", 00:16:19.982 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:19.982 "strip_size_kb": 0, 00:16:19.982 "state": "online", 
00:16:19.982 "raid_level": "raid1", 00:16:19.982 "superblock": true, 00:16:19.982 "num_base_bdevs": 4, 00:16:19.982 "num_base_bdevs_discovered": 3, 00:16:19.982 "num_base_bdevs_operational": 3, 00:16:19.982 "base_bdevs_list": [ 00:16:19.982 { 00:16:19.982 "name": null, 00:16:19.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.982 "is_configured": false, 00:16:19.982 "data_offset": 0, 00:16:19.982 "data_size": 63488 00:16:19.982 }, 00:16:19.982 { 00:16:19.982 "name": "pt2", 00:16:19.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.982 "is_configured": true, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 }, 00:16:19.982 { 00:16:19.982 "name": "pt3", 00:16:19.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.982 "is_configured": true, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 }, 00:16:19.982 { 00:16:19.982 "name": "pt4", 00:16:19.982 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.982 "is_configured": true, 00:16:19.982 "data_offset": 2048, 00:16:19.982 "data_size": 63488 00:16:19.982 } 00:16:19.982 ] 00:16:19.982 }' 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.982 14:15:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 [2024-11-27 14:15:51.185074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.242 [2024-11-27 14:15:51.185107] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.242 [2024-11-27 14:15:51.185208] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:16:20.242 [2024-11-27 14:15:51.185299] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.242 [2024-11-27 14:15:51.185310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.242 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.502 
14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.502 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.502 [2024-11-27 14:15:51.280901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.502 [2024-11-27 14:15:51.280954] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.502 [2024-11-27 14:15:51.280972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:20.502 [2024-11-27 14:15:51.280981] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.502 [2024-11-27 14:15:51.283253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.502 [2024-11-27 14:15:51.283288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.502 [2024-11-27 14:15:51.283368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:20.502 [2024-11-27 14:15:51.283414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.502 pt2 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.503 "name": "raid_bdev1", 00:16:20.503 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:20.503 "strip_size_kb": 0, 00:16:20.503 "state": "configuring", 00:16:20.503 "raid_level": "raid1", 00:16:20.503 "superblock": true, 00:16:20.503 "num_base_bdevs": 4, 00:16:20.503 "num_base_bdevs_discovered": 1, 00:16:20.503 "num_base_bdevs_operational": 3, 00:16:20.503 "base_bdevs_list": [ 00:16:20.503 { 00:16:20.503 "name": null, 00:16:20.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.503 "is_configured": false, 00:16:20.503 "data_offset": 2048, 00:16:20.503 "data_size": 63488 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": "pt2", 00:16:20.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.503 "is_configured": true, 00:16:20.503 "data_offset": 2048, 00:16:20.503 "data_size": 63488 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": null, 00:16:20.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.503 "is_configured": false, 00:16:20.503 "data_offset": 2048, 00:16:20.503 "data_size": 63488 00:16:20.503 }, 00:16:20.503 { 00:16:20.503 "name": null, 00:16:20.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.503 "is_configured": false, 00:16:20.503 "data_offset": 2048, 00:16:20.503 "data_size": 63488 00:16:20.503 } 00:16:20.503 ] 00:16:20.503 }' 
00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.503 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.764 [2024-11-27 14:15:51.712221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:20.764 [2024-11-27 14:15:51.712351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.764 [2024-11-27 14:15:51.712394] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:20.764 [2024-11-27 14:15:51.712430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.764 [2024-11-27 14:15:51.712948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.764 [2024-11-27 14:15:51.713018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:20.764 [2024-11-27 14:15:51.713154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:20.764 [2024-11-27 14:15:51.713218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:20.764 pt3 00:16:20.764 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.023 "name": "raid_bdev1", 00:16:21.023 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:21.023 "strip_size_kb": 0, 00:16:21.023 "state": "configuring", 00:16:21.023 "raid_level": "raid1", 00:16:21.023 "superblock": true, 00:16:21.023 "num_base_bdevs": 4, 00:16:21.023 "num_base_bdevs_discovered": 2, 00:16:21.023 "num_base_bdevs_operational": 3, 00:16:21.023 
"base_bdevs_list": [ 00:16:21.023 { 00:16:21.023 "name": null, 00:16:21.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.023 "is_configured": false, 00:16:21.023 "data_offset": 2048, 00:16:21.023 "data_size": 63488 00:16:21.023 }, 00:16:21.023 { 00:16:21.023 "name": "pt2", 00:16:21.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.023 "is_configured": true, 00:16:21.023 "data_offset": 2048, 00:16:21.023 "data_size": 63488 00:16:21.023 }, 00:16:21.023 { 00:16:21.023 "name": "pt3", 00:16:21.023 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.023 "is_configured": true, 00:16:21.023 "data_offset": 2048, 00:16:21.023 "data_size": 63488 00:16:21.023 }, 00:16:21.023 { 00:16:21.023 "name": null, 00:16:21.023 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.023 "is_configured": false, 00:16:21.023 "data_offset": 2048, 00:16:21.023 "data_size": 63488 00:16:21.023 } 00:16:21.023 ] 00:16:21.023 }' 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.023 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 [2024-11-27 14:15:52.143499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:21.283 [2024-11-27 14:15:52.143570] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.283 [2024-11-27 14:15:52.143598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:21.283 [2024-11-27 14:15:52.143607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.283 [2024-11-27 14:15:52.144045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.283 [2024-11-27 14:15:52.144083] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:21.283 [2024-11-27 14:15:52.144188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:21.283 [2024-11-27 14:15:52.144218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:21.283 [2024-11-27 14:15:52.144352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:21.283 [2024-11-27 14:15:52.144364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.283 [2024-11-27 14:15:52.144649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:21.283 [2024-11-27 14:15:52.144833] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:21.283 [2024-11-27 14:15:52.144847] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:21.283 [2024-11-27 14:15:52.145000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.283 pt4 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.283 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.283 "name": "raid_bdev1", 00:16:21.283 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:21.283 "strip_size_kb": 0, 00:16:21.283 "state": "online", 00:16:21.283 "raid_level": "raid1", 00:16:21.283 "superblock": true, 00:16:21.283 "num_base_bdevs": 4, 00:16:21.283 "num_base_bdevs_discovered": 3, 00:16:21.283 "num_base_bdevs_operational": 3, 00:16:21.283 "base_bdevs_list": [ 00:16:21.283 { 00:16:21.283 "name": null, 00:16:21.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.283 "is_configured": false, 00:16:21.283 
"data_offset": 2048, 00:16:21.283 "data_size": 63488 00:16:21.283 }, 00:16:21.283 { 00:16:21.283 "name": "pt2", 00:16:21.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.284 "is_configured": true, 00:16:21.284 "data_offset": 2048, 00:16:21.284 "data_size": 63488 00:16:21.284 }, 00:16:21.284 { 00:16:21.284 "name": "pt3", 00:16:21.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.284 "is_configured": true, 00:16:21.284 "data_offset": 2048, 00:16:21.284 "data_size": 63488 00:16:21.284 }, 00:16:21.284 { 00:16:21.284 "name": "pt4", 00:16:21.284 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.284 "is_configured": true, 00:16:21.284 "data_offset": 2048, 00:16:21.284 "data_size": 63488 00:16:21.284 } 00:16:21.284 ] 00:16:21.284 }' 00:16:21.284 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.284 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.858 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.858 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.858 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.858 [2024-11-27 14:15:52.570710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.858 [2024-11-27 14:15:52.570791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.858 [2024-11-27 14:15:52.570892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.859 [2024-11-27 14:15:52.570979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.859 [2024-11-27 14:15:52.571043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:21.859 14:15:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.859 [2024-11-27 14:15:52.642595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.859 [2024-11-27 14:15:52.642696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:21.859 [2024-11-27 14:15:52.642733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:21.859 [2024-11-27 14:15:52.642765] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.859 [2024-11-27 14:15:52.645087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.859 [2024-11-27 14:15:52.645196] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.859 [2024-11-27 14:15:52.645314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.859 [2024-11-27 14:15:52.645377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.859 [2024-11-27 14:15:52.645542] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:21.859 [2024-11-27 14:15:52.645599] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.859 [2024-11-27 14:15:52.645629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:21.859 [2024-11-27 14:15:52.645721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.859 [2024-11-27 14:15:52.645855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:21.859 pt1 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.859 "name": "raid_bdev1", 00:16:21.859 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:21.859 "strip_size_kb": 0, 00:16:21.859 "state": "configuring", 00:16:21.859 "raid_level": "raid1", 00:16:21.859 "superblock": true, 00:16:21.859 "num_base_bdevs": 4, 00:16:21.859 "num_base_bdevs_discovered": 2, 00:16:21.859 "num_base_bdevs_operational": 3, 00:16:21.859 "base_bdevs_list": [ 00:16:21.859 { 00:16:21.859 "name": null, 00:16:21.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.859 "is_configured": false, 00:16:21.859 "data_offset": 2048, 00:16:21.859 
"data_size": 63488 00:16:21.859 }, 00:16:21.859 { 00:16:21.859 "name": "pt2", 00:16:21.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.859 "is_configured": true, 00:16:21.859 "data_offset": 2048, 00:16:21.859 "data_size": 63488 00:16:21.859 }, 00:16:21.859 { 00:16:21.859 "name": "pt3", 00:16:21.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.859 "is_configured": true, 00:16:21.859 "data_offset": 2048, 00:16:21.859 "data_size": 63488 00:16:21.859 }, 00:16:21.859 { 00:16:21.859 "name": null, 00:16:21.859 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.859 "is_configured": false, 00:16:21.859 "data_offset": 2048, 00:16:21.859 "data_size": 63488 00:16:21.859 } 00:16:21.859 ] 00:16:21.859 }' 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.859 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.135 [2024-11-27 
14:15:53.057913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:22.135 [2024-11-27 14:15:53.057978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.135 [2024-11-27 14:15:53.058000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:22.135 [2024-11-27 14:15:53.058009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.135 [2024-11-27 14:15:53.058446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.135 [2024-11-27 14:15:53.058477] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:22.135 [2024-11-27 14:15:53.058562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:22.135 [2024-11-27 14:15:53.058583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:22.135 [2024-11-27 14:15:53.058719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:22.135 [2024-11-27 14:15:53.058732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.135 [2024-11-27 14:15:53.058981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:22.135 [2024-11-27 14:15:53.059150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:22.135 [2024-11-27 14:15:53.059163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:22.135 [2024-11-27 14:15:53.059298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.135 pt4 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:22.135 14:15:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.135 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.394 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.394 "name": "raid_bdev1", 00:16:22.394 "uuid": "71cdd9fe-056c-4336-a071-0db662564f9e", 00:16:22.394 "strip_size_kb": 0, 00:16:22.394 "state": "online", 00:16:22.394 "raid_level": "raid1", 00:16:22.394 "superblock": true, 00:16:22.394 "num_base_bdevs": 4, 00:16:22.394 "num_base_bdevs_discovered": 3, 00:16:22.394 "num_base_bdevs_operational": 3, 00:16:22.394 "base_bdevs_list": [ 00:16:22.394 { 
00:16:22.394 "name": null, 00:16:22.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.394 "is_configured": false, 00:16:22.394 "data_offset": 2048, 00:16:22.394 "data_size": 63488 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": "pt2", 00:16:22.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 2048, 00:16:22.394 "data_size": 63488 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": "pt3", 00:16:22.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 2048, 00:16:22.394 "data_size": 63488 00:16:22.394 }, 00:16:22.394 { 00:16:22.394 "name": "pt4", 00:16:22.394 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.394 "is_configured": true, 00:16:22.394 "data_offset": 2048, 00:16:22.394 "data_size": 63488 00:16:22.394 } 00:16:22.394 ] 00:16:22.394 }' 00:16:22.394 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.394 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:22.653 
14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.653 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.653 [2024-11-27 14:15:53.585368] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 71cdd9fe-056c-4336-a071-0db662564f9e '!=' 71cdd9fe-056c-4336-a071-0db662564f9e ']' 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74778 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74778 ']' 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74778 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74778 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.912 killing process with pid 74778 00:16:22.912 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74778' 00:16:22.913 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74778 00:16:22.913 [2024-11-27 14:15:53.666300] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.913 [2024-11-27 14:15:53.666406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.913 [2024-11-27 14:15:53.666479] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.913 [2024-11-27 14:15:53.666490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:22.913 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74778 00:16:23.171 [2024-11-27 14:15:54.054915] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.546 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:24.546 00:16:24.546 real 0m8.430s 00:16:24.546 user 0m13.325s 00:16:24.546 sys 0m1.471s 00:16:24.546 14:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.546 ************************************ 00:16:24.546 END TEST raid_superblock_test 00:16:24.546 ************************************ 00:16:24.546 14:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.547 14:15:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:24.547 14:15:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:24.547 14:15:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.547 14:15:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.547 ************************************ 00:16:24.547 START TEST raid_read_error_test 00:16:24.547 ************************************ 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:24.547 14:15:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Gr3RlN9GIy 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75260 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75260 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75260 ']' 00:16:24.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.547 14:15:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.547 [2024-11-27 14:15:55.349321] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:24.547 [2024-11-27 14:15:55.349436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75260 ] 00:16:24.805 [2024-11-27 14:15:55.525970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.805 [2024-11-27 14:15:55.653370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.063 [2024-11-27 14:15:55.875538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.063 [2024-11-27 14:15:55.875602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.322 BaseBdev1_malloc 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.322 true 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.322 [2024-11-27 14:15:56.248997] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:25.322 [2024-11-27 14:15:56.249055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.322 [2024-11-27 14:15:56.249074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:25.322 [2024-11-27 14:15:56.249085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.322 [2024-11-27 14:15:56.251269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.322 [2024-11-27 14:15:56.251308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.322 BaseBdev1 00:16:25.322 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.323 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:25.323 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:25.323 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.323 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 BaseBdev2_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 true 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 [2024-11-27 14:15:56.317749] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:25.583 [2024-11-27 14:15:56.317847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.583 [2024-11-27 14:15:56.317881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:25.583 [2024-11-27 14:15:56.317910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.583 [2024-11-27 14:15:56.319976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.583 [2024-11-27 14:15:56.320074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.583 BaseBdev2 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 BaseBdev3_malloc 00:16:25.583 14:15:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 true 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 [2024-11-27 14:15:56.396717] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:25.583 [2024-11-27 14:15:56.396813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.583 [2024-11-27 14:15:56.396849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:25.583 [2024-11-27 14:15:56.396881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.583 [2024-11-27 14:15:56.398923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.583 [2024-11-27 14:15:56.399010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.583 BaseBdev3 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 BaseBdev4_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 true 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 [2024-11-27 14:15:56.465091] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:25.583 [2024-11-27 14:15:56.465158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.583 [2024-11-27 14:15:56.465176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:25.583 [2024-11-27 14:15:56.465186] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.583 [2024-11-27 14:15:56.467187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.583 [2024-11-27 14:15:56.467271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:25.583 BaseBdev4 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.583 [2024-11-27 14:15:56.477148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.583 [2024-11-27 14:15:56.478978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.583 [2024-11-27 14:15:56.479050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.583 [2024-11-27 14:15:56.479109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:25.583 [2024-11-27 14:15:56.479349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:25.583 [2024-11-27 14:15:56.479370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:25.583 [2024-11-27 14:15:56.479597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:25.583 [2024-11-27 14:15:56.479759] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:25.583 [2024-11-27 14:15:56.479768] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:25.583 [2024-11-27 14:15:56.479937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:25.583 14:15:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.583 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.584 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.584 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.584 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.584 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.584 "name": "raid_bdev1", 00:16:25.584 "uuid": "8d8b1d2f-3742-4ef2-9e38-af5f9af177a0", 00:16:25.584 "strip_size_kb": 0, 00:16:25.584 "state": "online", 00:16:25.584 "raid_level": "raid1", 00:16:25.584 "superblock": true, 00:16:25.584 "num_base_bdevs": 4, 00:16:25.584 "num_base_bdevs_discovered": 4, 00:16:25.584 "num_base_bdevs_operational": 4, 00:16:25.584 "base_bdevs_list": [ 00:16:25.584 { 
00:16:25.584 "name": "BaseBdev1", 00:16:25.584 "uuid": "b0ba3f9c-2c98-51a7-b1ba-e2a059315054", 00:16:25.584 "is_configured": true, 00:16:25.584 "data_offset": 2048, 00:16:25.584 "data_size": 63488 00:16:25.584 }, 00:16:25.584 { 00:16:25.584 "name": "BaseBdev2", 00:16:25.584 "uuid": "e0014cfb-d2eb-5395-8555-6393dcadf7d2", 00:16:25.584 "is_configured": true, 00:16:25.584 "data_offset": 2048, 00:16:25.584 "data_size": 63488 00:16:25.584 }, 00:16:25.584 { 00:16:25.584 "name": "BaseBdev3", 00:16:25.584 "uuid": "81b46c05-6171-503c-9ed7-da51fcad1535", 00:16:25.584 "is_configured": true, 00:16:25.584 "data_offset": 2048, 00:16:25.584 "data_size": 63488 00:16:25.584 }, 00:16:25.584 { 00:16:25.584 "name": "BaseBdev4", 00:16:25.584 "uuid": "b4b15a8a-4ea7-50dd-add3-37785ae10379", 00:16:25.584 "is_configured": true, 00:16:25.584 "data_offset": 2048, 00:16:25.584 "data_size": 63488 00:16:25.584 } 00:16:25.584 ] 00:16:25.584 }' 00:16:25.584 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.843 14:15:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.104 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:26.104 14:15:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:26.104 [2024-11-27 14:15:56.997652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.041 14:15:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.041 14:15:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.041 "name": "raid_bdev1", 00:16:27.041 "uuid": "8d8b1d2f-3742-4ef2-9e38-af5f9af177a0", 00:16:27.041 "strip_size_kb": 0, 00:16:27.041 "state": "online", 00:16:27.041 "raid_level": "raid1", 00:16:27.041 "superblock": true, 00:16:27.041 "num_base_bdevs": 4, 00:16:27.041 "num_base_bdevs_discovered": 4, 00:16:27.041 "num_base_bdevs_operational": 4, 00:16:27.041 "base_bdevs_list": [ 00:16:27.041 { 00:16:27.041 "name": "BaseBdev1", 00:16:27.041 "uuid": "b0ba3f9c-2c98-51a7-b1ba-e2a059315054", 00:16:27.041 "is_configured": true, 00:16:27.041 "data_offset": 2048, 00:16:27.041 "data_size": 63488 00:16:27.041 }, 00:16:27.041 { 00:16:27.041 "name": "BaseBdev2", 00:16:27.041 "uuid": "e0014cfb-d2eb-5395-8555-6393dcadf7d2", 00:16:27.041 "is_configured": true, 00:16:27.041 "data_offset": 2048, 00:16:27.041 "data_size": 63488 00:16:27.041 }, 00:16:27.041 { 00:16:27.041 "name": "BaseBdev3", 00:16:27.041 "uuid": "81b46c05-6171-503c-9ed7-da51fcad1535", 00:16:27.041 "is_configured": true, 00:16:27.041 "data_offset": 2048, 00:16:27.041 "data_size": 63488 00:16:27.041 }, 00:16:27.041 { 00:16:27.041 "name": "BaseBdev4", 00:16:27.041 "uuid": "b4b15a8a-4ea7-50dd-add3-37785ae10379", 00:16:27.041 "is_configured": true, 00:16:27.041 "data_offset": 2048, 00:16:27.041 "data_size": 63488 00:16:27.041 } 00:16:27.041 ] 00:16:27.041 }' 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.041 14:15:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:27.609 [2024-11-27 14:15:58.356229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.609 [2024-11-27 14:15:58.356339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.609 [2024-11-27 14:15:58.359351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.609 [2024-11-27 14:15:58.359460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.609 [2024-11-27 14:15:58.359603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.609 [2024-11-27 14:15:58.359653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:27.609 { 00:16:27.609 "results": [ 00:16:27.609 { 00:16:27.609 "job": "raid_bdev1", 00:16:27.609 "core_mask": "0x1", 00:16:27.609 "workload": "randrw", 00:16:27.609 "percentage": 50, 00:16:27.609 "status": "finished", 00:16:27.609 "queue_depth": 1, 00:16:27.609 "io_size": 131072, 00:16:27.609 "runtime": 1.359646, 00:16:27.609 "iops": 10399.765821397628, 00:16:27.609 "mibps": 1299.9707276747035, 00:16:27.609 "io_failed": 0, 00:16:27.609 "io_timeout": 0, 00:16:27.609 "avg_latency_us": 93.42129052580866, 00:16:27.609 "min_latency_us": 23.58777292576419, 00:16:27.609 "max_latency_us": 1545.3903930131005 00:16:27.609 } 00:16:27.609 ], 00:16:27.609 "core_count": 1 00:16:27.609 } 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75260 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75260 ']' 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75260 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75260 00:16:27.609 killing process with pid 75260 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75260' 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75260 00:16:27.609 [2024-11-27 14:15:58.402063] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.609 14:15:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75260 00:16:27.868 [2024-11-27 14:15:58.725150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Gr3RlN9GIy 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:29.246 00:16:29.246 real 0m4.657s 00:16:29.246 user 0m5.502s 00:16:29.246 sys 0m0.565s 
00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.246 14:15:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.246 ************************************ 00:16:29.246 END TEST raid_read_error_test 00:16:29.246 ************************************ 00:16:29.246 14:15:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:29.246 14:15:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:29.246 14:15:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.246 14:15:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.246 ************************************ 00:16:29.246 START TEST raid_write_error_test 00:16:29.246 ************************************ 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QcMmyRNah8 00:16:29.246 14:15:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75411 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75411 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75411 ']' 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.246 14:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.246 [2024-11-27 14:16:00.074784] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:29.246 [2024-11-27 14:16:00.074902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75411 ] 00:16:29.505 [2024-11-27 14:16:00.246405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.505 [2024-11-27 14:16:00.362887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.764 [2024-11-27 14:16:00.557438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.764 [2024-11-27 14:16:00.557499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.024 BaseBdev1_malloc 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.024 true 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.024 [2024-11-27 14:16:00.966423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:30.024 [2024-11-27 14:16:00.966477] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.024 [2024-11-27 14:16:00.966513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:30.024 [2024-11-27 14:16:00.966523] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.024 [2024-11-27 14:16:00.968600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.024 [2024-11-27 14:16:00.968642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:30.024 BaseBdev1 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.024 14:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 BaseBdev2_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:30.284 14:16:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 true 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 [2024-11-27 14:16:01.031981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:30.284 [2024-11-27 14:16:01.032057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.284 [2024-11-27 14:16:01.032074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:30.284 [2024-11-27 14:16:01.032084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.284 [2024-11-27 14:16:01.034151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.284 [2024-11-27 14:16:01.034185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:30.284 BaseBdev2 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:30.284 BaseBdev3_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 true 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 [2024-11-27 14:16:01.125250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:30.284 [2024-11-27 14:16:01.125357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.284 [2024-11-27 14:16:01.125378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:30.284 [2024-11-27 14:16:01.125388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.284 [2024-11-27 14:16:01.127435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.284 [2024-11-27 14:16:01.127508] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:30.284 BaseBdev3 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 BaseBdev4_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 true 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.284 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.284 [2024-11-27 14:16:01.189624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:30.284 [2024-11-27 14:16:01.189676] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.284 [2024-11-27 14:16:01.189709] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:30.284 [2024-11-27 14:16:01.189719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.285 [2024-11-27 14:16:01.191817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.285 [2024-11-27 14:16:01.191901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:30.285 BaseBdev4 
00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.285 [2024-11-27 14:16:01.201659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.285 [2024-11-27 14:16:01.203456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:30.285 [2024-11-27 14:16:01.203542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.285 [2024-11-27 14:16:01.203601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:30.285 [2024-11-27 14:16:01.203818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:30.285 [2024-11-27 14:16:01.203833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.285 [2024-11-27 14:16:01.204090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:30.285 [2024-11-27 14:16:01.204268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:30.285 [2024-11-27 14:16:01.204278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:30.285 [2024-11-27 14:16:01.204419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.285 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.544 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.544 "name": "raid_bdev1", 00:16:30.544 "uuid": "21cc0ff3-1e53-4ecb-a095-5a728eb3c85e", 00:16:30.544 "strip_size_kb": 0, 00:16:30.544 "state": "online", 00:16:30.544 "raid_level": "raid1", 00:16:30.544 "superblock": true, 00:16:30.544 "num_base_bdevs": 4, 00:16:30.544 "num_base_bdevs_discovered": 4, 00:16:30.544 
"num_base_bdevs_operational": 4, 00:16:30.544 "base_bdevs_list": [ 00:16:30.544 { 00:16:30.544 "name": "BaseBdev1", 00:16:30.544 "uuid": "13144320-3362-5b19-a796-cd9a9ac23460", 00:16:30.544 "is_configured": true, 00:16:30.544 "data_offset": 2048, 00:16:30.544 "data_size": 63488 00:16:30.544 }, 00:16:30.544 { 00:16:30.544 "name": "BaseBdev2", 00:16:30.544 "uuid": "d5227cee-67c7-55f7-aeb5-915983389462", 00:16:30.544 "is_configured": true, 00:16:30.544 "data_offset": 2048, 00:16:30.544 "data_size": 63488 00:16:30.544 }, 00:16:30.544 { 00:16:30.544 "name": "BaseBdev3", 00:16:30.544 "uuid": "668ec38c-0fea-595c-b686-206026bc4438", 00:16:30.544 "is_configured": true, 00:16:30.544 "data_offset": 2048, 00:16:30.544 "data_size": 63488 00:16:30.544 }, 00:16:30.544 { 00:16:30.545 "name": "BaseBdev4", 00:16:30.545 "uuid": "3203205e-072b-5506-a77c-5d7fc37f0929", 00:16:30.545 "is_configured": true, 00:16:30.545 "data_offset": 2048, 00:16:30.545 "data_size": 63488 00:16:30.545 } 00:16:30.545 ] 00:16:30.545 }' 00:16:30.545 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.545 14:16:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.804 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:30.804 14:16:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:30.804 [2024-11-27 14:16:01.714034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.742 [2024-11-27 14:16:02.645040] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:31.742 [2024-11-27 14:16:02.645210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.742 [2024-11-27 14:16:02.645501] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.742 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.002 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.002 "name": "raid_bdev1", 00:16:32.003 "uuid": "21cc0ff3-1e53-4ecb-a095-5a728eb3c85e", 00:16:32.003 "strip_size_kb": 0, 00:16:32.003 "state": "online", 00:16:32.003 "raid_level": "raid1", 00:16:32.003 "superblock": true, 00:16:32.003 "num_base_bdevs": 4, 00:16:32.003 "num_base_bdevs_discovered": 3, 00:16:32.003 "num_base_bdevs_operational": 3, 00:16:32.003 "base_bdevs_list": [ 00:16:32.003 { 00:16:32.003 "name": null, 00:16:32.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.003 "is_configured": false, 00:16:32.003 "data_offset": 0, 00:16:32.003 "data_size": 63488 00:16:32.003 }, 00:16:32.003 { 00:16:32.003 "name": "BaseBdev2", 00:16:32.003 "uuid": "d5227cee-67c7-55f7-aeb5-915983389462", 00:16:32.003 "is_configured": true, 00:16:32.003 "data_offset": 2048, 00:16:32.003 "data_size": 63488 00:16:32.003 }, 00:16:32.003 { 00:16:32.003 "name": "BaseBdev3", 00:16:32.003 "uuid": "668ec38c-0fea-595c-b686-206026bc4438", 00:16:32.003 "is_configured": true, 00:16:32.003 "data_offset": 2048, 00:16:32.003 "data_size": 63488 00:16:32.003 }, 00:16:32.003 { 00:16:32.003 "name": "BaseBdev4", 00:16:32.003 "uuid": "3203205e-072b-5506-a77c-5d7fc37f0929", 00:16:32.003 "is_configured": true, 00:16:32.003 "data_offset": 2048, 00:16:32.003 "data_size": 63488 00:16:32.003 } 00:16:32.003 ] 
00:16:32.003 }' 00:16:32.003 14:16:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.003 14:16:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.262 14:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.262 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.262 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.262 [2024-11-27 14:16:03.076972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.262 [2024-11-27 14:16:03.077088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.263 [2024-11-27 14:16:03.079786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.263 [2024-11-27 14:16:03.079828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.263 [2024-11-27 14:16:03.079928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.263 [2024-11-27 14:16:03.079940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:32.263 { 00:16:32.263 "results": [ 00:16:32.263 { 00:16:32.263 "job": "raid_bdev1", 00:16:32.263 "core_mask": "0x1", 00:16:32.263 "workload": "randrw", 00:16:32.263 "percentage": 50, 00:16:32.263 "status": "finished", 00:16:32.263 "queue_depth": 1, 00:16:32.263 "io_size": 131072, 00:16:32.263 "runtime": 1.363815, 00:16:32.263 "iops": 11499.360250473854, 00:16:32.263 "mibps": 1437.4200313092317, 00:16:32.263 "io_failed": 0, 00:16:32.263 "io_timeout": 0, 00:16:32.263 "avg_latency_us": 84.29300716961346, 00:16:32.263 "min_latency_us": 23.02882096069869, 00:16:32.263 "max_latency_us": 1502.46288209607 00:16:32.263 } 00:16:32.263 ], 00:16:32.263 "core_count": 1 
00:16:32.263 } 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75411 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75411 ']' 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75411 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75411 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75411' 00:16:32.263 killing process with pid 75411 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75411 00:16:32.263 [2024-11-27 14:16:03.114882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.263 14:16:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75411 00:16:32.571 [2024-11-27 14:16:03.433491] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QcMmyRNah8 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:33.952 00:16:33.952 real 0m4.654s 00:16:33.952 user 0m5.461s 00:16:33.952 sys 0m0.557s 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.952 14:16:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.952 ************************************ 00:16:33.952 END TEST raid_write_error_test 00:16:33.952 ************************************ 00:16:33.952 14:16:04 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:33.952 14:16:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:33.952 14:16:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:33.952 14:16:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:33.952 14:16:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.952 14:16:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.952 ************************************ 00:16:33.952 START TEST raid_rebuild_test 00:16:33.952 ************************************ 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:33.952 
14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75549 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75549 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75549 ']' 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.952 14:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.952 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:33.952 Zero copy mechanism will not be used. 00:16:33.952 [2024-11-27 14:16:04.794363] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:16:33.952 [2024-11-27 14:16:04.794499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75549 ] 00:16:34.213 [2024-11-27 14:16:04.972149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.213 [2024-11-27 14:16:05.093944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.473 [2024-11-27 14:16:05.298181] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.473 [2024-11-27 14:16:05.298241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.732 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 BaseBdev1_malloc 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 [2024-11-27 14:16:05.694935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:34.992 
[2024-11-27 14:16:05.695000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.992 [2024-11-27 14:16:05.695025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:34.992 [2024-11-27 14:16:05.695036] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.992 [2024-11-27 14:16:05.697376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.992 [2024-11-27 14:16:05.697470] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:34.992 BaseBdev1 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 BaseBdev2_malloc 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 [2024-11-27 14:16:05.752365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:34.992 [2024-11-27 14:16:05.752437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.992 [2024-11-27 14:16:05.752465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:16:34.992 [2024-11-27 14:16:05.752477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.992 [2024-11-27 14:16:05.754827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.992 [2024-11-27 14:16:05.754953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:34.992 BaseBdev2 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 spare_malloc 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.992 spare_delay 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.992 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.993 [2024-11-27 14:16:05.832866] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:34.993 [2024-11-27 14:16:05.832932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:34.993 [2024-11-27 14:16:05.832955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:34.993 [2024-11-27 14:16:05.832966] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.993 [2024-11-27 14:16:05.835191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.993 [2024-11-27 14:16:05.835229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:34.993 spare 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.993 [2024-11-27 14:16:05.844890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.993 [2024-11-27 14:16:05.846704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:34.993 [2024-11-27 14:16:05.846792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:34.993 [2024-11-27 14:16:05.846805] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:34.993 [2024-11-27 14:16:05.847046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:34.993 [2024-11-27 14:16:05.847226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:34.993 [2024-11-27 14:16:05.847238] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:34.993 [2024-11-27 14:16:05.847391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.993 "name": "raid_bdev1", 00:16:34.993 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:34.993 "strip_size_kb": 0, 00:16:34.993 "state": "online", 00:16:34.993 
"raid_level": "raid1", 00:16:34.993 "superblock": false, 00:16:34.993 "num_base_bdevs": 2, 00:16:34.993 "num_base_bdevs_discovered": 2, 00:16:34.993 "num_base_bdevs_operational": 2, 00:16:34.993 "base_bdevs_list": [ 00:16:34.993 { 00:16:34.993 "name": "BaseBdev1", 00:16:34.993 "uuid": "9af22482-dc92-5827-8659-8d7a23e25028", 00:16:34.993 "is_configured": true, 00:16:34.993 "data_offset": 0, 00:16:34.993 "data_size": 65536 00:16:34.993 }, 00:16:34.993 { 00:16:34.993 "name": "BaseBdev2", 00:16:34.993 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:34.993 "is_configured": true, 00:16:34.993 "data_offset": 0, 00:16:34.993 "data_size": 65536 00:16:34.993 } 00:16:34.993 ] 00:16:34.993 }' 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.993 14:16:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.562 [2024-11-27 14:16:06.284515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.562 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.562 14:16:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:35.563 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:35.823 [2024-11-27 14:16:06.559759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:35.823 /dev/nbd0 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:35.823 1+0 records in 00:16:35.823 1+0 records out 00:16:35.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574821 s, 7.1 MB/s 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:35.823 14:16:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:40.020 65536+0 records in 00:16:40.020 65536+0 records out 00:16:40.020 33554432 bytes (34 MB, 32 MiB) copied, 4.20996 s, 8.0 MB/s 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:40.020 14:16:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:40.280 [2024-11-27 14:16:11.080391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.280 [2024-11-27 14:16:11.096475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.280 14:16:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.280 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.281 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.281 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.281 "name": "raid_bdev1", 00:16:40.281 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:40.281 "strip_size_kb": 0, 00:16:40.281 "state": "online", 00:16:40.281 "raid_level": "raid1", 00:16:40.281 "superblock": false, 00:16:40.281 "num_base_bdevs": 2, 00:16:40.281 "num_base_bdevs_discovered": 1, 00:16:40.281 "num_base_bdevs_operational": 1, 00:16:40.281 "base_bdevs_list": [ 00:16:40.281 { 00:16:40.281 "name": null, 00:16:40.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.281 "is_configured": false, 00:16:40.281 "data_offset": 0, 00:16:40.281 "data_size": 65536 00:16:40.281 }, 00:16:40.281 { 00:16:40.281 "name": "BaseBdev2", 00:16:40.281 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:40.281 "is_configured": true, 00:16:40.281 "data_offset": 0, 00:16:40.281 "data_size": 65536 00:16:40.281 } 00:16:40.281 ] 00:16:40.281 }' 00:16:40.281 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.281 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.850 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:40.850 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.850 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.850 [2024-11-27 14:16:11.539898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.850 [2024-11-27 14:16:11.557823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
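The `verify_raid_bdev_state` calls traced above pull one bdev's record out of `rpc.py bdev_raid_get_bdevs all` with `jq` and then compare individual fields. A minimal standalone sketch of that selection, using a trimmed JSON sample modeled on the dump in this log (not live RPC output):

```shell
# Select one raid bdev's record by name from bdev_raid_get_bdevs-style
# JSON, then check individual fields -- the same jq pattern the test's
# verify_raid_bdev_state helper uses. The sample array below is modeled
# on the raid_bdev_info dump in the log, not fetched over RPC.
raid_bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
"num_base_bdevs_discovered":1,"num_base_bdevs_operational":1}]'

raid_bdev_info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$raid_bdev_info" | jq -r '.state')
level=$(echo "$raid_bdev_info" | jq -r '.raid_level')

[ "$state" = "online" ] || { echo "unexpected state: $state" >&2; exit 1; }
[ "$level" = "raid1" ] || { echo "unexpected level: $level" >&2; exit 1; }
echo "raid_bdev1: $state ($level)"
```

The same `.process.type // "none"` / `.process.target // "none"` queries seen later in the log use jq's `//` alternative operator to fall back to `"none"` when no rebuild process is attached to the bdev.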
00:16:40.850 14:16:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.850 [2024-11-27 14:16:11.559766] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.850 14:16:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.787 "name": "raid_bdev1", 00:16:41.787 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:41.787 "strip_size_kb": 0, 00:16:41.787 "state": "online", 00:16:41.787 "raid_level": "raid1", 00:16:41.787 "superblock": false, 00:16:41.787 "num_base_bdevs": 2, 00:16:41.787 "num_base_bdevs_discovered": 2, 00:16:41.787 "num_base_bdevs_operational": 2, 00:16:41.787 "process": { 00:16:41.787 "type": "rebuild", 00:16:41.787 "target": "spare", 00:16:41.787 "progress": { 00:16:41.787 
"blocks": 20480, 00:16:41.787 "percent": 31 00:16:41.787 } 00:16:41.787 }, 00:16:41.787 "base_bdevs_list": [ 00:16:41.787 { 00:16:41.787 "name": "spare", 00:16:41.787 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:41.787 "is_configured": true, 00:16:41.787 "data_offset": 0, 00:16:41.787 "data_size": 65536 00:16:41.787 }, 00:16:41.787 { 00:16:41.787 "name": "BaseBdev2", 00:16:41.787 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:41.787 "is_configured": true, 00:16:41.787 "data_offset": 0, 00:16:41.787 "data_size": 65536 00:16:41.787 } 00:16:41.787 ] 00:16:41.787 }' 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.787 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.787 [2024-11-27 14:16:12.728307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.047 [2024-11-27 14:16:12.766133] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.047 [2024-11-27 14:16:12.766295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.047 [2024-11-27 14:16:12.766335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.047 [2024-11-27 14:16:12.766362] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.047 14:16:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.047 "name": "raid_bdev1", 00:16:42.047 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:42.047 "strip_size_kb": 0, 00:16:42.047 "state": "online", 00:16:42.047 "raid_level": "raid1", 00:16:42.047 
"superblock": false, 00:16:42.047 "num_base_bdevs": 2, 00:16:42.047 "num_base_bdevs_discovered": 1, 00:16:42.047 "num_base_bdevs_operational": 1, 00:16:42.047 "base_bdevs_list": [ 00:16:42.047 { 00:16:42.047 "name": null, 00:16:42.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.047 "is_configured": false, 00:16:42.047 "data_offset": 0, 00:16:42.047 "data_size": 65536 00:16:42.047 }, 00:16:42.047 { 00:16:42.047 "name": "BaseBdev2", 00:16:42.047 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:42.047 "is_configured": true, 00:16:42.047 "data_offset": 0, 00:16:42.047 "data_size": 65536 00:16:42.047 } 00:16:42.047 ] 00:16:42.047 }' 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.047 14:16:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.307 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.307 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.307 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.307 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.307 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.566 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:42.567 "name": "raid_bdev1", 00:16:42.567 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:42.567 "strip_size_kb": 0, 00:16:42.567 "state": "online", 00:16:42.567 "raid_level": "raid1", 00:16:42.567 "superblock": false, 00:16:42.567 "num_base_bdevs": 2, 00:16:42.567 "num_base_bdevs_discovered": 1, 00:16:42.567 "num_base_bdevs_operational": 1, 00:16:42.567 "base_bdevs_list": [ 00:16:42.567 { 00:16:42.567 "name": null, 00:16:42.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.567 "is_configured": false, 00:16:42.567 "data_offset": 0, 00:16:42.567 "data_size": 65536 00:16:42.567 }, 00:16:42.567 { 00:16:42.567 "name": "BaseBdev2", 00:16:42.567 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:42.567 "is_configured": true, 00:16:42.567 "data_offset": 0, 00:16:42.567 "data_size": 65536 00:16:42.567 } 00:16:42.567 ] 00:16:42.567 }' 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.567 [2024-11-27 14:16:13.416871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.567 [2024-11-27 14:16:13.433989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:16:42.567 14:16:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.567 
14:16:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:42.567 [2024-11-27 14:16:13.436145] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.505 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.765 "name": "raid_bdev1", 00:16:43.765 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:43.765 "strip_size_kb": 0, 00:16:43.765 "state": "online", 00:16:43.765 "raid_level": "raid1", 00:16:43.765 "superblock": false, 00:16:43.765 "num_base_bdevs": 2, 00:16:43.765 "num_base_bdevs_discovered": 2, 00:16:43.765 "num_base_bdevs_operational": 2, 00:16:43.765 "process": { 00:16:43.765 "type": "rebuild", 00:16:43.765 "target": "spare", 00:16:43.765 "progress": { 00:16:43.765 "blocks": 20480, 00:16:43.765 "percent": 31 00:16:43.765 } 00:16:43.765 }, 00:16:43.765 "base_bdevs_list": [ 
00:16:43.765 { 00:16:43.765 "name": "spare", 00:16:43.765 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:43.765 "is_configured": true, 00:16:43.765 "data_offset": 0, 00:16:43.765 "data_size": 65536 00:16:43.765 }, 00:16:43.765 { 00:16:43.765 "name": "BaseBdev2", 00:16:43.765 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:43.765 "is_configured": true, 00:16:43.765 "data_offset": 0, 00:16:43.765 "data_size": 65536 00:16:43.765 } 00:16:43.765 ] 00:16:43.765 }' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.765 
14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.765 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.765 "name": "raid_bdev1", 00:16:43.765 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:43.765 "strip_size_kb": 0, 00:16:43.765 "state": "online", 00:16:43.765 "raid_level": "raid1", 00:16:43.766 "superblock": false, 00:16:43.766 "num_base_bdevs": 2, 00:16:43.766 "num_base_bdevs_discovered": 2, 00:16:43.766 "num_base_bdevs_operational": 2, 00:16:43.766 "process": { 00:16:43.766 "type": "rebuild", 00:16:43.766 "target": "spare", 00:16:43.766 "progress": { 00:16:43.766 "blocks": 22528, 00:16:43.766 "percent": 34 00:16:43.766 } 00:16:43.766 }, 00:16:43.766 "base_bdevs_list": [ 00:16:43.766 { 00:16:43.766 "name": "spare", 00:16:43.766 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:43.766 "is_configured": true, 00:16:43.766 "data_offset": 0, 00:16:43.766 "data_size": 65536 00:16:43.766 }, 00:16:43.766 { 00:16:43.766 "name": "BaseBdev2", 00:16:43.766 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:43.766 "is_configured": true, 00:16:43.766 "data_offset": 0, 00:16:43.766 "data_size": 65536 00:16:43.766 } 00:16:43.766 ] 00:16:43.766 }' 00:16:43.766 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.766 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:43.766 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.766 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.766 14:16:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.144 "name": "raid_bdev1", 00:16:45.144 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:45.144 "strip_size_kb": 0, 00:16:45.144 "state": "online", 00:16:45.144 "raid_level": "raid1", 00:16:45.144 "superblock": false, 00:16:45.144 "num_base_bdevs": 2, 00:16:45.144 "num_base_bdevs_discovered": 2, 00:16:45.144 "num_base_bdevs_operational": 2, 00:16:45.144 "process": { 
00:16:45.144 "type": "rebuild", 00:16:45.144 "target": "spare", 00:16:45.144 "progress": { 00:16:45.144 "blocks": 45056, 00:16:45.144 "percent": 68 00:16:45.144 } 00:16:45.144 }, 00:16:45.144 "base_bdevs_list": [ 00:16:45.144 { 00:16:45.144 "name": "spare", 00:16:45.144 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:45.144 "is_configured": true, 00:16:45.144 "data_offset": 0, 00:16:45.144 "data_size": 65536 00:16:45.144 }, 00:16:45.144 { 00:16:45.144 "name": "BaseBdev2", 00:16:45.144 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:45.144 "is_configured": true, 00:16:45.144 "data_offset": 0, 00:16:45.144 "data_size": 65536 00:16:45.144 } 00:16:45.144 ] 00:16:45.144 }' 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.144 14:16:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.710 [2024-11-27 14:16:16.651930] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:45.710 [2024-11-27 14:16:16.652105] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:45.710 [2024-11-27 14:16:16.652190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.969 "name": "raid_bdev1", 00:16:45.969 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:45.969 "strip_size_kb": 0, 00:16:45.969 "state": "online", 00:16:45.969 "raid_level": "raid1", 00:16:45.969 "superblock": false, 00:16:45.969 "num_base_bdevs": 2, 00:16:45.969 "num_base_bdevs_discovered": 2, 00:16:45.969 "num_base_bdevs_operational": 2, 00:16:45.969 "base_bdevs_list": [ 00:16:45.969 { 00:16:45.969 "name": "spare", 00:16:45.969 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:45.969 "is_configured": true, 00:16:45.969 "data_offset": 0, 00:16:45.969 "data_size": 65536 00:16:45.969 }, 00:16:45.969 { 00:16:45.969 "name": "BaseBdev2", 00:16:45.969 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:45.969 "is_configured": true, 00:16:45.969 "data_offset": 0, 00:16:45.969 "data_size": 65536 00:16:45.969 } 00:16:45.969 ] 00:16:45.969 }' 00:16:45.969 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:46.228 14:16:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.228 14:16:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.228 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.228 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.228 "name": "raid_bdev1", 00:16:46.228 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:46.228 "strip_size_kb": 0, 00:16:46.228 "state": "online", 00:16:46.228 "raid_level": "raid1", 00:16:46.228 "superblock": false, 00:16:46.228 "num_base_bdevs": 2, 00:16:46.228 "num_base_bdevs_discovered": 2, 00:16:46.228 "num_base_bdevs_operational": 2, 00:16:46.229 "base_bdevs_list": [ 00:16:46.229 { 00:16:46.229 "name": "spare", 00:16:46.229 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:46.229 "is_configured": true, 
00:16:46.229 "data_offset": 0, 00:16:46.229 "data_size": 65536 00:16:46.229 }, 00:16:46.229 { 00:16:46.229 "name": "BaseBdev2", 00:16:46.229 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:46.229 "is_configured": true, 00:16:46.229 "data_offset": 0, 00:16:46.229 "data_size": 65536 00:16:46.229 } 00:16:46.229 ] 00:16:46.229 }' 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.229 14:16:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.229 "name": "raid_bdev1", 00:16:46.229 "uuid": "7533cfbd-3549-4914-b932-e0ca1915ef4b", 00:16:46.229 "strip_size_kb": 0, 00:16:46.229 "state": "online", 00:16:46.229 "raid_level": "raid1", 00:16:46.229 "superblock": false, 00:16:46.229 "num_base_bdevs": 2, 00:16:46.229 "num_base_bdevs_discovered": 2, 00:16:46.229 "num_base_bdevs_operational": 2, 00:16:46.229 "base_bdevs_list": [ 00:16:46.229 { 00:16:46.229 "name": "spare", 00:16:46.229 "uuid": "7eb1f6a2-d639-5e2f-a975-26d24d6a86b5", 00:16:46.229 "is_configured": true, 00:16:46.229 "data_offset": 0, 00:16:46.229 "data_size": 65536 00:16:46.229 }, 00:16:46.229 { 00:16:46.229 "name": "BaseBdev2", 00:16:46.229 "uuid": "15920bf9-4b6e-5031-a6a0-3cb79b84c0a4", 00:16:46.229 "is_configured": true, 00:16:46.229 "data_offset": 0, 00:16:46.229 "data_size": 65536 00:16:46.229 } 00:16:46.229 ] 00:16:46.229 }' 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.229 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.819 [2024-11-27 14:16:17.591514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.819 [2024-11-27 
14:16:17.591545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.819 [2024-11-27 14:16:17.591638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.819 [2024-11-27 14:16:17.591708] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.819 [2024-11-27 14:16:17.591719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:46.819 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:47.078 /dev/nbd0 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.078 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.079 1+0 records in 00:16:47.079 1+0 records out 00:16:47.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323274 s, 12.7 MB/s 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.079 14:16:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:47.337 /dev/nbd1 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:47.337 1+0 records in 00:16:47.337 1+0 records out 00:16:47.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491268 s, 8.3 MB/s 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:47.337 14:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.596 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:47.854 14:16:18 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:47.854 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:47.855 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75549 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75549 ']' 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75549 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75549 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75549' 00:16:48.114 killing process with pid 75549 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75549 00:16:48.114 Received shutdown signal, test time was about 60.000000 seconds 00:16:48.114 00:16:48.114 Latency(us) 00:16:48.114 [2024-11-27T14:16:19.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.114 [2024-11-27T14:16:19.070Z] =================================================================================================================== 00:16:48.114 [2024-11-27T14:16:19.070Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:48.114 [2024-11-27 14:16:18.922066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.114 14:16:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75549 00:16:48.373 [2024-11-27 14:16:19.230826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:49.749 00:16:49.749 real 0m15.703s 00:16:49.749 user 0m17.913s 00:16:49.749 sys 
0m3.096s 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.749 ************************************ 00:16:49.749 END TEST raid_rebuild_test 00:16:49.749 ************************************ 00:16:49.749 14:16:20 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:16:49.749 14:16:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:49.749 14:16:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.749 14:16:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.749 ************************************ 00:16:49.749 START TEST raid_rebuild_test_sb 00:16:49.749 ************************************ 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75973 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75973 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 75973 ']' 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.749 14:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.749 [2024-11-27 14:16:20.570531] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:16:49.749 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:49.749 Zero copy mechanism will not be used. 00:16:49.749 [2024-11-27 14:16:20.570746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75973 ] 00:16:50.008 [2024-11-27 14:16:20.745233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.008 [2024-11-27 14:16:20.865296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.267 [2024-11-27 14:16:21.073136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.267 [2024-11-27 14:16:21.073198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.528 BaseBdev1_malloc 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.528 [2024-11-27 14:16:21.465800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.528 [2024-11-27 14:16:21.465860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.528 [2024-11-27 14:16:21.465901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.528 [2024-11-27 14:16:21.465912] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.528 [2024-11-27 14:16:21.467981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.528 [2024-11-27 14:16:21.468098] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.528 BaseBdev1 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.528 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 BaseBdev2_malloc 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 [2024-11-27 14:16:21.520435] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.788 [2024-11-27 14:16:21.520494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.788 [2024-11-27 14:16:21.520518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.788 [2024-11-27 14:16:21.520529] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.788 [2024-11-27 14:16:21.522488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.788 [2024-11-27 14:16:21.522527] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.788 BaseBdev2 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 spare_malloc 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 spare_delay 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 [2024-11-27 14:16:21.599363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.788 [2024-11-27 14:16:21.599418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.788 [2024-11-27 14:16:21.599437] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:50.788 [2024-11-27 14:16:21.599448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.788 [2024-11-27 14:16:21.601583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.788 [2024-11-27 14:16:21.601691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.788 spare 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 
14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 [2024-11-27 14:16:21.611401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.788 [2024-11-27 14:16:21.613281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.788 [2024-11-27 14:16:21.613448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.788 [2024-11-27 14:16:21.613464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:50.788 [2024-11-27 14:16:21.613735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:50.788 [2024-11-27 14:16:21.613942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.788 [2024-11-27 14:16:21.613951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.788 [2024-11-27 14:16:21.614105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.788 "name": "raid_bdev1", 00:16:50.788 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:50.788 "strip_size_kb": 0, 00:16:50.788 "state": "online", 00:16:50.788 "raid_level": "raid1", 00:16:50.788 "superblock": true, 00:16:50.788 "num_base_bdevs": 2, 00:16:50.788 "num_base_bdevs_discovered": 2, 00:16:50.788 "num_base_bdevs_operational": 2, 00:16:50.788 "base_bdevs_list": [ 00:16:50.788 { 00:16:50.788 "name": "BaseBdev1", 00:16:50.788 "uuid": "72eb3ba1-4019-56ea-9e70-e3bee8a9e71c", 00:16:50.788 "is_configured": true, 00:16:50.788 "data_offset": 2048, 00:16:50.788 "data_size": 63488 00:16:50.788 }, 00:16:50.788 { 00:16:50.788 "name": "BaseBdev2", 00:16:50.788 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:50.788 "is_configured": true, 00:16:50.788 "data_offset": 2048, 00:16:50.788 "data_size": 63488 00:16:50.788 } 00:16:50.788 ] 00:16:50.788 }' 00:16:50.788 14:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.788 14:16:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:51.377 [2024-11-27 14:16:22.095132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.377 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:51.378 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.378 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.378 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:51.637 [2024-11-27 14:16:22.382180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.637 /dev/nbd0 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:51.637 1+0 records in 00:16:51.637 1+0 records out 00:16:51.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454707 s, 9.0 MB/s 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:51.637 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:51.638 14:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:55.829 63488+0 records in 00:16:55.829 63488+0 records out 00:16:55.829 32505856 bytes (33 MB, 31 MiB) copied, 4.30942 s, 7.5 MB/s 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.829 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.089 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.089 [2024-11-27 14:16:26.984500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.089 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.089 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.089 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.090 14:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.090 [2024-11-27 14:16:27.000550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.090 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.348 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.349 "name": "raid_bdev1", 00:16:56.349 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:56.349 "strip_size_kb": 0, 00:16:56.349 "state": "online", 00:16:56.349 "raid_level": "raid1", 
00:16:56.349 "superblock": true, 00:16:56.349 "num_base_bdevs": 2, 00:16:56.349 "num_base_bdevs_discovered": 1, 00:16:56.349 "num_base_bdevs_operational": 1, 00:16:56.349 "base_bdevs_list": [ 00:16:56.349 { 00:16:56.349 "name": null, 00:16:56.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.349 "is_configured": false, 00:16:56.349 "data_offset": 0, 00:16:56.349 "data_size": 63488 00:16:56.349 }, 00:16:56.349 { 00:16:56.349 "name": "BaseBdev2", 00:16:56.349 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:56.349 "is_configured": true, 00:16:56.349 "data_offset": 2048, 00:16:56.349 "data_size": 63488 00:16:56.349 } 00:16:56.349 ] 00:16:56.349 }' 00:16:56.349 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.349 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.607 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.607 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.608 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.608 [2024-11-27 14:16:27.499753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.608 [2024-11-27 14:16:27.517424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:56.608 [2024-11-27 14:16:27.519391] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.608 14:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.608 14:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.987 "name": "raid_bdev1", 00:16:57.987 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:57.987 "strip_size_kb": 0, 00:16:57.987 "state": "online", 00:16:57.987 "raid_level": "raid1", 00:16:57.987 "superblock": true, 00:16:57.987 "num_base_bdevs": 2, 00:16:57.987 "num_base_bdevs_discovered": 2, 00:16:57.987 "num_base_bdevs_operational": 2, 00:16:57.987 "process": { 00:16:57.987 "type": "rebuild", 00:16:57.987 "target": "spare", 00:16:57.987 "progress": { 00:16:57.987 "blocks": 20480, 00:16:57.987 "percent": 32 00:16:57.987 } 00:16:57.987 }, 00:16:57.987 "base_bdevs_list": [ 00:16:57.987 { 00:16:57.987 "name": "spare", 00:16:57.987 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:16:57.987 "is_configured": true, 00:16:57.987 "data_offset": 2048, 00:16:57.987 "data_size": 63488 00:16:57.987 }, 00:16:57.987 { 00:16:57.987 "name": "BaseBdev2", 00:16:57.987 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:57.987 "is_configured": true, 00:16:57.987 "data_offset": 2048, 
00:16:57.987 "data_size": 63488 00:16:57.987 } 00:16:57.987 ] 00:16:57.987 }' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 [2024-11-27 14:16:28.662974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.987 [2024-11-27 14:16:28.725333] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.987 [2024-11-27 14:16:28.725406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.987 [2024-11-27 14:16:28.725422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.987 [2024-11-27 14:16:28.725431] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.987 14:16:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.987 "name": "raid_bdev1", 00:16:57.987 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:57.987 "strip_size_kb": 0, 00:16:57.987 "state": "online", 00:16:57.987 "raid_level": "raid1", 00:16:57.987 "superblock": true, 00:16:57.987 "num_base_bdevs": 2, 00:16:57.987 "num_base_bdevs_discovered": 1, 00:16:57.987 "num_base_bdevs_operational": 1, 00:16:57.987 "base_bdevs_list": [ 00:16:57.987 { 00:16:57.987 "name": null, 00:16:57.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.987 "is_configured": false, 00:16:57.987 "data_offset": 0, 00:16:57.987 "data_size": 63488 00:16:57.987 }, 00:16:57.987 { 
00:16:57.987 "name": "BaseBdev2", 00:16:57.987 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:57.987 "is_configured": true, 00:16:57.987 "data_offset": 2048, 00:16:57.987 "data_size": 63488 00:16:57.987 } 00:16:57.987 ] 00:16:57.987 }' 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.987 14:16:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.556 "name": "raid_bdev1", 00:16:58.556 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:58.556 "strip_size_kb": 0, 00:16:58.556 "state": "online", 00:16:58.556 "raid_level": "raid1", 00:16:58.556 "superblock": true, 00:16:58.556 "num_base_bdevs": 2, 00:16:58.556 "num_base_bdevs_discovered": 1, 00:16:58.556 "num_base_bdevs_operational": 1, 
00:16:58.556 "base_bdevs_list": [ 00:16:58.556 { 00:16:58.556 "name": null, 00:16:58.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.556 "is_configured": false, 00:16:58.556 "data_offset": 0, 00:16:58.556 "data_size": 63488 00:16:58.556 }, 00:16:58.556 { 00:16:58.556 "name": "BaseBdev2", 00:16:58.556 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:58.556 "is_configured": true, 00:16:58.556 "data_offset": 2048, 00:16:58.556 "data_size": 63488 00:16:58.556 } 00:16:58.556 ] 00:16:58.556 }' 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.556 [2024-11-27 14:16:29.381479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.556 [2024-11-27 14:16:29.398946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.556 14:16:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:58.556 [2024-11-27 14:16:29.400842] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.490 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:16:59.490 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.490 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.490 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.490 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.491 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.491 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.491 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.491 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.491 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.750 "name": "raid_bdev1", 00:16:59.750 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:59.750 "strip_size_kb": 0, 00:16:59.750 "state": "online", 00:16:59.750 "raid_level": "raid1", 00:16:59.750 "superblock": true, 00:16:59.750 "num_base_bdevs": 2, 00:16:59.750 "num_base_bdevs_discovered": 2, 00:16:59.750 "num_base_bdevs_operational": 2, 00:16:59.750 "process": { 00:16:59.750 "type": "rebuild", 00:16:59.750 "target": "spare", 00:16:59.750 "progress": { 00:16:59.750 "blocks": 20480, 00:16:59.750 "percent": 32 00:16:59.750 } 00:16:59.750 }, 00:16:59.750 "base_bdevs_list": [ 00:16:59.750 { 00:16:59.750 "name": "spare", 00:16:59.750 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 }, 00:16:59.750 { 00:16:59.750 "name": "BaseBdev2", 00:16:59.750 "uuid": 
"b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 } 00:16:59.750 ] 00:16:59.750 }' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:59.750 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.750 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.750 "name": "raid_bdev1", 00:16:59.750 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:16:59.750 "strip_size_kb": 0, 00:16:59.750 "state": "online", 00:16:59.750 "raid_level": "raid1", 00:16:59.750 "superblock": true, 00:16:59.750 "num_base_bdevs": 2, 00:16:59.750 "num_base_bdevs_discovered": 2, 00:16:59.750 "num_base_bdevs_operational": 2, 00:16:59.750 "process": { 00:16:59.750 "type": "rebuild", 00:16:59.750 "target": "spare", 00:16:59.750 "progress": { 00:16:59.750 "blocks": 22528, 00:16:59.750 "percent": 35 00:16:59.750 } 00:16:59.750 }, 00:16:59.750 "base_bdevs_list": [ 00:16:59.750 { 00:16:59.750 "name": "spare", 00:16:59.750 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 }, 00:16:59.750 { 00:16:59.750 "name": "BaseBdev2", 00:16:59.750 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:16:59.750 "is_configured": true, 00:16:59.750 "data_offset": 2048, 00:16:59.750 "data_size": 63488 00:16:59.750 } 00:16:59.750 ] 00:16:59.750 }' 00:16:59.751 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.751 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:59.751 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.751 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.751 14:16:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.128 "name": "raid_bdev1", 00:17:01.128 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:01.128 "strip_size_kb": 0, 00:17:01.128 "state": "online", 00:17:01.128 "raid_level": "raid1", 00:17:01.128 "superblock": true, 00:17:01.128 "num_base_bdevs": 2, 00:17:01.128 "num_base_bdevs_discovered": 2, 00:17:01.128 
"num_base_bdevs_operational": 2, 00:17:01.128 "process": { 00:17:01.128 "type": "rebuild", 00:17:01.128 "target": "spare", 00:17:01.128 "progress": { 00:17:01.128 "blocks": 45056, 00:17:01.128 "percent": 70 00:17:01.128 } 00:17:01.128 }, 00:17:01.128 "base_bdevs_list": [ 00:17:01.128 { 00:17:01.128 "name": "spare", 00:17:01.128 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:01.128 "is_configured": true, 00:17:01.128 "data_offset": 2048, 00:17:01.128 "data_size": 63488 00:17:01.128 }, 00:17:01.128 { 00:17:01.128 "name": "BaseBdev2", 00:17:01.128 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:01.128 "is_configured": true, 00:17:01.128 "data_offset": 2048, 00:17:01.128 "data_size": 63488 00:17:01.128 } 00:17:01.128 ] 00:17:01.128 }' 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.128 14:16:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.696 [2024-11-27 14:16:32.515496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.696 [2024-11-27 14:16:32.515581] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.696 [2024-11-27 14:16:32.515707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.956 "name": "raid_bdev1", 00:17:01.956 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:01.956 "strip_size_kb": 0, 00:17:01.956 "state": "online", 00:17:01.956 "raid_level": "raid1", 00:17:01.956 "superblock": true, 00:17:01.956 "num_base_bdevs": 2, 00:17:01.956 "num_base_bdevs_discovered": 2, 00:17:01.956 "num_base_bdevs_operational": 2, 00:17:01.956 "base_bdevs_list": [ 00:17:01.956 { 00:17:01.956 "name": "spare", 00:17:01.956 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:01.956 "is_configured": true, 00:17:01.956 "data_offset": 2048, 00:17:01.956 "data_size": 63488 00:17:01.956 }, 00:17:01.956 { 00:17:01.956 "name": "BaseBdev2", 00:17:01.956 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:01.956 "is_configured": true, 00:17:01.956 "data_offset": 2048, 00:17:01.956 "data_size": 63488 00:17:01.956 } 00:17:01.956 ] 00:17:01.956 }' 00:17:01.956 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.216 14:16:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.216 "name": "raid_bdev1", 00:17:02.216 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:02.216 "strip_size_kb": 0, 00:17:02.216 "state": "online", 00:17:02.216 "raid_level": "raid1", 00:17:02.216 "superblock": true, 00:17:02.216 "num_base_bdevs": 2, 00:17:02.216 "num_base_bdevs_discovered": 2, 00:17:02.216 "num_base_bdevs_operational": 2, 
00:17:02.216 "base_bdevs_list": [ 00:17:02.216 { 00:17:02.216 "name": "spare", 00:17:02.216 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:02.216 "is_configured": true, 00:17:02.216 "data_offset": 2048, 00:17:02.216 "data_size": 63488 00:17:02.216 }, 00:17:02.216 { 00:17:02.216 "name": "BaseBdev2", 00:17:02.216 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:02.216 "is_configured": true, 00:17:02.216 "data_offset": 2048, 00:17:02.216 "data_size": 63488 00:17:02.216 } 00:17:02.216 ] 00:17:02.216 }' 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.216 14:16:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.216 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.217 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.217 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.217 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.217 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.476 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.476 "name": "raid_bdev1", 00:17:02.476 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:02.476 "strip_size_kb": 0, 00:17:02.476 "state": "online", 00:17:02.476 "raid_level": "raid1", 00:17:02.476 "superblock": true, 00:17:02.477 "num_base_bdevs": 2, 00:17:02.477 "num_base_bdevs_discovered": 2, 00:17:02.477 "num_base_bdevs_operational": 2, 00:17:02.477 "base_bdevs_list": [ 00:17:02.477 { 00:17:02.477 "name": "spare", 00:17:02.477 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:02.477 "is_configured": true, 00:17:02.477 "data_offset": 2048, 00:17:02.477 "data_size": 63488 00:17:02.477 }, 00:17:02.477 { 00:17:02.477 "name": "BaseBdev2", 00:17:02.477 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:02.477 "is_configured": true, 00:17:02.477 "data_offset": 2048, 00:17:02.477 "data_size": 63488 00:17:02.477 } 00:17:02.477 ] 00:17:02.477 }' 00:17:02.477 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.477 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.736 [2024-11-27 14:16:33.583553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.736 [2024-11-27 14:16:33.583587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.736 [2024-11-27 14:16:33.583711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.736 [2024-11-27 14:16:33.583788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.736 [2024-11-27 14:16:33.583801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.736 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.737 
14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.737 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.997 /dev/nbd0 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.997 14:16:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.997 1+0 records in 00:17:02.997 1+0 records out 00:17:02.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517956 s, 7.9 MB/s 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.997 14:16:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.257 /dev/nbd1 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- 
# grep -q -w nbd1 /proc/partitions 00:17:03.257 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:03.258 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:03.258 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:03.258 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.518 1+0 records in 00:17:03.518 1+0 records out 00:17:03.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617308 s, 6.6 MB/s 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.518 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.779 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.039 [2024-11-27 14:16:34.895071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.039 [2024-11-27 14:16:34.895141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.039 [2024-11-27 14:16:34.895193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:04.039 [2024-11-27 14:16:34.895206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.039 [2024-11-27 14:16:34.897895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.039 [2024-11-27 14:16:34.897935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.039 [2024-11-27 14:16:34.898033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:17:04.039 [2024-11-27 14:16:34.898109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.039 [2024-11-27 14:16:34.898301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.039 spare 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.039 14:16:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.298 [2024-11-27 14:16:34.998220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:04.298 [2024-11-27 14:16:34.998269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.298 [2024-11-27 14:16:34.998600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:17:04.298 [2024-11-27 14:16:34.998815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:04.298 [2024-11-27 14:16:34.998826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:04.298 [2024-11-27 14:16:34.999021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.298 14:16:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.298 "name": "raid_bdev1", 00:17:04.298 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:04.298 "strip_size_kb": 0, 00:17:04.298 "state": "online", 00:17:04.298 "raid_level": "raid1", 00:17:04.298 "superblock": true, 00:17:04.298 "num_base_bdevs": 2, 00:17:04.298 "num_base_bdevs_discovered": 2, 00:17:04.298 "num_base_bdevs_operational": 2, 00:17:04.298 "base_bdevs_list": [ 00:17:04.298 { 00:17:04.298 "name": "spare", 00:17:04.298 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:04.298 "is_configured": true, 00:17:04.298 "data_offset": 2048, 00:17:04.298 "data_size": 63488 00:17:04.298 }, 00:17:04.298 { 
00:17:04.298 "name": "BaseBdev2", 00:17:04.298 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:04.298 "is_configured": true, 00:17:04.298 "data_offset": 2048, 00:17:04.298 "data_size": 63488 00:17:04.298 } 00:17:04.298 ] 00:17:04.298 }' 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.298 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.558 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.558 "name": "raid_bdev1", 00:17:04.558 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:04.558 "strip_size_kb": 0, 00:17:04.558 "state": "online", 00:17:04.558 "raid_level": "raid1", 00:17:04.558 "superblock": true, 00:17:04.558 "num_base_bdevs": 2, 00:17:04.558 "num_base_bdevs_discovered": 2, 00:17:04.558 "num_base_bdevs_operational": 2, 
00:17:04.558 "base_bdevs_list": [ 00:17:04.558 { 00:17:04.558 "name": "spare", 00:17:04.558 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:04.558 "is_configured": true, 00:17:04.558 "data_offset": 2048, 00:17:04.558 "data_size": 63488 00:17:04.558 }, 00:17:04.558 { 00:17:04.558 "name": "BaseBdev2", 00:17:04.558 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:04.558 "is_configured": true, 00:17:04.558 "data_offset": 2048, 00:17:04.558 "data_size": 63488 00:17:04.558 } 00:17:04.558 ] 00:17:04.558 }' 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.817 [2024-11-27 14:16:35.645920] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.817 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.818 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.818 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.818 "name": "raid_bdev1", 00:17:04.818 "uuid": 
"655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:04.818 "strip_size_kb": 0, 00:17:04.818 "state": "online", 00:17:04.818 "raid_level": "raid1", 00:17:04.818 "superblock": true, 00:17:04.818 "num_base_bdevs": 2, 00:17:04.818 "num_base_bdevs_discovered": 1, 00:17:04.818 "num_base_bdevs_operational": 1, 00:17:04.818 "base_bdevs_list": [ 00:17:04.818 { 00:17:04.818 "name": null, 00:17:04.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.818 "is_configured": false, 00:17:04.818 "data_offset": 0, 00:17:04.818 "data_size": 63488 00:17:04.818 }, 00:17:04.818 { 00:17:04.818 "name": "BaseBdev2", 00:17:04.818 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:04.818 "is_configured": true, 00:17:04.818 "data_offset": 2048, 00:17:04.818 "data_size": 63488 00:17:04.818 } 00:17:04.818 ] 00:17:04.818 }' 00:17:04.818 14:16:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.818 14:16:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.103 14:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.103 14:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.103 14:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 [2024-11-27 14:16:36.057271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.361 [2024-11-27 14:16:36.057551] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.361 [2024-11-27 14:16:36.057620] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:05.361 [2024-11-27 14:16:36.057700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.361 [2024-11-27 14:16:36.074844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:17:05.361 14:16:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.361 14:16:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:05.361 [2024-11-27 14:16:36.076870] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.299 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.300 "name": "raid_bdev1", 00:17:06.300 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:06.300 "strip_size_kb": 0, 00:17:06.300 "state": "online", 00:17:06.300 "raid_level": "raid1", 
00:17:06.300 "superblock": true, 00:17:06.300 "num_base_bdevs": 2, 00:17:06.300 "num_base_bdevs_discovered": 2, 00:17:06.300 "num_base_bdevs_operational": 2, 00:17:06.300 "process": { 00:17:06.300 "type": "rebuild", 00:17:06.300 "target": "spare", 00:17:06.300 "progress": { 00:17:06.300 "blocks": 20480, 00:17:06.300 "percent": 32 00:17:06.300 } 00:17:06.300 }, 00:17:06.300 "base_bdevs_list": [ 00:17:06.300 { 00:17:06.300 "name": "spare", 00:17:06.300 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:06.300 "is_configured": true, 00:17:06.300 "data_offset": 2048, 00:17:06.300 "data_size": 63488 00:17:06.300 }, 00:17:06.300 { 00:17:06.300 "name": "BaseBdev2", 00:17:06.300 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:06.300 "is_configured": true, 00:17:06.300 "data_offset": 2048, 00:17:06.300 "data_size": 63488 00:17:06.300 } 00:17:06.300 ] 00:17:06.300 }' 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.300 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.300 [2024-11-27 14:16:37.248447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.560 [2024-11-27 14:16:37.282779] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.560 [2024-11-27 14:16:37.282976] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:06.560 [2024-11-27 14:16:37.283020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.560 [2024-11-27 14:16:37.283062] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.560 "name": "raid_bdev1", 00:17:06.560 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:06.560 "strip_size_kb": 0, 00:17:06.560 "state": "online", 00:17:06.560 "raid_level": "raid1", 00:17:06.560 "superblock": true, 00:17:06.560 "num_base_bdevs": 2, 00:17:06.560 "num_base_bdevs_discovered": 1, 00:17:06.560 "num_base_bdevs_operational": 1, 00:17:06.560 "base_bdevs_list": [ 00:17:06.560 { 00:17:06.560 "name": null, 00:17:06.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.560 "is_configured": false, 00:17:06.560 "data_offset": 0, 00:17:06.560 "data_size": 63488 00:17:06.560 }, 00:17:06.560 { 00:17:06.560 "name": "BaseBdev2", 00:17:06.560 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:06.560 "is_configured": true, 00:17:06.560 "data_offset": 2048, 00:17:06.560 "data_size": 63488 00:17:06.560 } 00:17:06.560 ] 00:17:06.560 }' 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.560 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.132 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.132 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.132 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.132 [2024-11-27 14:16:37.795245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.133 [2024-11-27 14:16:37.795405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.133 [2024-11-27 14:16:37.795436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:07.133 [2024-11-27 14:16:37.795449] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.133 [2024-11-27 14:16:37.795970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.133 [2024-11-27 14:16:37.795996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.133 [2024-11-27 14:16:37.796104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:07.133 [2024-11-27 14:16:37.796135] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.133 [2024-11-27 14:16:37.796148] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:07.133 [2024-11-27 14:16:37.796176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.133 [2024-11-27 14:16:37.814437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:17:07.133 spare 00:17:07.133 14:16:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.133 14:16:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:07.133 [2024-11-27 14:16:37.816550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.071 "name": "raid_bdev1", 00:17:08.071 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:08.071 "strip_size_kb": 0, 00:17:08.071 "state": "online", 00:17:08.071 "raid_level": "raid1", 00:17:08.071 "superblock": true, 00:17:08.071 "num_base_bdevs": 2, 00:17:08.071 "num_base_bdevs_discovered": 2, 00:17:08.071 "num_base_bdevs_operational": 2, 00:17:08.071 "process": { 00:17:08.071 "type": "rebuild", 00:17:08.071 "target": "spare", 00:17:08.071 "progress": { 00:17:08.071 "blocks": 20480, 00:17:08.071 "percent": 32 00:17:08.071 } 00:17:08.071 }, 00:17:08.071 "base_bdevs_list": [ 00:17:08.071 { 00:17:08.071 "name": "spare", 00:17:08.071 "uuid": "b81236ea-a935-5608-94fd-c62dc861dc21", 00:17:08.071 "is_configured": true, 00:17:08.071 "data_offset": 2048, 00:17:08.071 "data_size": 63488 00:17:08.071 }, 00:17:08.071 { 00:17:08.071 "name": "BaseBdev2", 00:17:08.071 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:08.071 "is_configured": true, 00:17:08.071 "data_offset": 2048, 00:17:08.071 "data_size": 63488 00:17:08.071 } 00:17:08.071 ] 00:17:08.071 }' 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.071 
14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.071 14:16:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.071 [2024-11-27 14:16:38.980067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.071 [2024-11-27 14:16:39.022247] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.071 [2024-11-27 14:16:39.022364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.071 [2024-11-27 14:16:39.022408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.071 [2024-11-27 14:16:39.022439] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.331 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.331 "name": "raid_bdev1", 00:17:08.331 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:08.331 "strip_size_kb": 0, 00:17:08.331 "state": "online", 00:17:08.331 "raid_level": "raid1", 00:17:08.331 "superblock": true, 00:17:08.331 "num_base_bdevs": 2, 00:17:08.331 "num_base_bdevs_discovered": 1, 00:17:08.331 "num_base_bdevs_operational": 1, 00:17:08.331 "base_bdevs_list": [ 00:17:08.331 { 00:17:08.331 "name": null, 00:17:08.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.331 "is_configured": false, 00:17:08.331 "data_offset": 0, 00:17:08.331 "data_size": 63488 00:17:08.331 }, 00:17:08.331 { 00:17:08.331 "name": "BaseBdev2", 00:17:08.331 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:08.331 "is_configured": true, 00:17:08.331 "data_offset": 2048, 00:17:08.331 "data_size": 63488 00:17:08.332 } 00:17:08.332 ] 00:17:08.332 }' 00:17:08.332 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.332 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.900 14:16:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.900 "name": "raid_bdev1", 00:17:08.900 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:08.900 "strip_size_kb": 0, 00:17:08.900 "state": "online", 00:17:08.900 "raid_level": "raid1", 00:17:08.900 "superblock": true, 00:17:08.900 "num_base_bdevs": 2, 00:17:08.900 "num_base_bdevs_discovered": 1, 00:17:08.900 "num_base_bdevs_operational": 1, 00:17:08.900 "base_bdevs_list": [ 00:17:08.900 { 00:17:08.900 "name": null, 00:17:08.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.900 "is_configured": false, 00:17:08.900 "data_offset": 0, 00:17:08.900 "data_size": 63488 00:17:08.900 }, 00:17:08.900 { 00:17:08.900 "name": "BaseBdev2", 00:17:08.900 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:08.900 "is_configured": true, 00:17:08.900 "data_offset": 2048, 00:17:08.900 "data_size": 
63488 00:17:08.900 } 00:17:08.900 ] 00:17:08.900 }' 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.900 [2024-11-27 14:16:39.717740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.900 [2024-11-27 14:16:39.717808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.900 [2024-11-27 14:16:39.717841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:08.900 [2024-11-27 14:16:39.717863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.900 [2024-11-27 14:16:39.718365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.900 [2024-11-27 14:16:39.718391] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:17:08.900 [2024-11-27 14:16:39.718479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:08.900 [2024-11-27 14:16:39.718494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.900 [2024-11-27 14:16:39.718507] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.900 [2024-11-27 14:16:39.718519] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:08.900 BaseBdev1 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.900 14:16:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.839 "name": "raid_bdev1", 00:17:09.839 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:09.839 "strip_size_kb": 0, 00:17:09.839 "state": "online", 00:17:09.839 "raid_level": "raid1", 00:17:09.839 "superblock": true, 00:17:09.839 "num_base_bdevs": 2, 00:17:09.839 "num_base_bdevs_discovered": 1, 00:17:09.839 "num_base_bdevs_operational": 1, 00:17:09.839 "base_bdevs_list": [ 00:17:09.839 { 00:17:09.839 "name": null, 00:17:09.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.839 "is_configured": false, 00:17:09.839 "data_offset": 0, 00:17:09.839 "data_size": 63488 00:17:09.839 }, 00:17:09.839 { 00:17:09.839 "name": "BaseBdev2", 00:17:09.839 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:09.839 "is_configured": true, 00:17:09.839 "data_offset": 2048, 00:17:09.839 "data_size": 63488 00:17:09.839 } 00:17:09.839 ] 00:17:09.839 }' 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.839 14:16:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.408 "name": "raid_bdev1", 00:17:10.408 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:10.408 "strip_size_kb": 0, 00:17:10.408 "state": "online", 00:17:10.408 "raid_level": "raid1", 00:17:10.408 "superblock": true, 00:17:10.408 "num_base_bdevs": 2, 00:17:10.408 "num_base_bdevs_discovered": 1, 00:17:10.408 "num_base_bdevs_operational": 1, 00:17:10.408 "base_bdevs_list": [ 00:17:10.408 { 00:17:10.408 "name": null, 00:17:10.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.408 "is_configured": false, 00:17:10.408 "data_offset": 0, 00:17:10.408 "data_size": 63488 00:17:10.408 }, 00:17:10.408 { 00:17:10.408 "name": "BaseBdev2", 00:17:10.408 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:10.408 "is_configured": true, 00:17:10.408 "data_offset": 2048, 00:17:10.408 "data_size": 63488 00:17:10.408 } 00:17:10.408 ] 00:17:10.408 }' 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.408 14:16:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.408 [2024-11-27 14:16:41.335062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.408 [2024-11-27 14:16:41.335253] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.408 [2024-11-27 14:16:41.335273] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.408 request: 00:17:10.408 { 00:17:10.408 "base_bdev": "BaseBdev1", 00:17:10.408 "raid_bdev": "raid_bdev1", 00:17:10.408 "method": 
"bdev_raid_add_base_bdev", 00:17:10.408 "req_id": 1 00:17:10.408 } 00:17:10.408 Got JSON-RPC error response 00:17:10.408 response: 00:17:10.408 { 00:17:10.408 "code": -22, 00:17:10.408 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:10.408 } 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.408 14:16:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.796 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.796 14:16:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.797 "name": "raid_bdev1", 00:17:11.797 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:11.797 "strip_size_kb": 0, 00:17:11.797 "state": "online", 00:17:11.797 "raid_level": "raid1", 00:17:11.797 "superblock": true, 00:17:11.797 "num_base_bdevs": 2, 00:17:11.797 "num_base_bdevs_discovered": 1, 00:17:11.797 "num_base_bdevs_operational": 1, 00:17:11.797 "base_bdevs_list": [ 00:17:11.797 { 00:17:11.797 "name": null, 00:17:11.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.797 "is_configured": false, 00:17:11.797 "data_offset": 0, 00:17:11.797 "data_size": 63488 00:17:11.797 }, 00:17:11.797 { 00:17:11.797 "name": "BaseBdev2", 00:17:11.797 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:11.797 "is_configured": true, 00:17:11.797 "data_offset": 2048, 00:17:11.797 "data_size": 63488 00:17:11.797 } 00:17:11.797 ] 00:17:11.797 }' 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.797 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.056 "name": "raid_bdev1", 00:17:12.056 "uuid": "655be5a2-0972-4ee7-a041-bcd290b3471e", 00:17:12.056 "strip_size_kb": 0, 00:17:12.056 "state": "online", 00:17:12.056 "raid_level": "raid1", 00:17:12.056 "superblock": true, 00:17:12.056 "num_base_bdevs": 2, 00:17:12.056 "num_base_bdevs_discovered": 1, 00:17:12.056 "num_base_bdevs_operational": 1, 00:17:12.056 "base_bdevs_list": [ 00:17:12.056 { 00:17:12.056 "name": null, 00:17:12.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.056 "is_configured": false, 00:17:12.056 "data_offset": 0, 00:17:12.056 "data_size": 63488 00:17:12.056 }, 00:17:12.056 { 00:17:12.056 "name": "BaseBdev2", 00:17:12.056 "uuid": "b8dd2939-d4f2-59c6-8759-d2c85c5a5969", 00:17:12.056 "is_configured": true, 00:17:12.056 "data_offset": 2048, 00:17:12.056 "data_size": 63488 00:17:12.056 } 00:17:12.056 ] 00:17:12.056 }' 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75973 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75973 ']' 00:17:12.056 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75973 00:17:12.057 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:12.057 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.057 14:16:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75973 00:17:12.316 killing process with pid 75973 00:17:12.316 Received shutdown signal, test time was about 60.000000 seconds 00:17:12.316 00:17:12.316 Latency(us) 00:17:12.316 [2024-11-27T14:16:43.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.316 [2024-11-27T14:16:43.272Z] =================================================================================================================== 00:17:12.316 [2024-11-27T14:16:43.272Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.316 14:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.316 14:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.316 14:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75973' 00:17:12.316 14:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75973 00:17:12.316 [2024-11-27 14:16:43.010879] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.316 [2024-11-27 
14:16:43.011027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.316 14:16:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75973 00:17:12.316 [2024-11-27 14:16:43.011099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.316 [2024-11-27 14:16:43.011113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:12.575 [2024-11-27 14:16:43.330960] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.955 14:16:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:13.955 00:17:13.955 real 0m24.012s 00:17:13.955 user 0m29.576s 00:17:13.955 sys 0m3.812s 00:17:13.955 14:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.955 14:16:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.955 ************************************ 00:17:13.955 END TEST raid_rebuild_test_sb 00:17:13.955 ************************************ 00:17:13.955 14:16:44 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:17:13.955 14:16:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:13.955 14:16:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.955 14:16:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.955 ************************************ 00:17:13.955 START TEST raid_rebuild_test_io 00:17:13.955 ************************************ 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:13.956 
14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76708 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76708 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76708 ']' 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.956 14:16:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.956 [2024-11-27 14:16:44.651535] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:13.956 [2024-11-27 14:16:44.651740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:13.956 Zero copy mechanism will not be used. 
00:17:13.956 -allocations --file-prefix=spdk_pid76708 ] 00:17:13.956 [2024-11-27 14:16:44.824956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.216 [2024-11-27 14:16:44.952533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.475 [2024-11-27 14:16:45.181155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.475 [2024-11-27 14:16:45.181282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.735 BaseBdev1_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.735 [2024-11-27 14:16:45.555553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:14.735 [2024-11-27 14:16:45.555622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.735 [2024-11-27 14:16:45.555644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:17:14.735 [2024-11-27 14:16:45.555654] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.735 [2024-11-27 14:16:45.557887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.735 [2024-11-27 14:16:45.557942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.735 BaseBdev1 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.735 BaseBdev2_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.735 [2024-11-27 14:16:45.611629] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:14.735 [2024-11-27 14:16:45.611710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.735 [2024-11-27 14:16:45.611738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:14.735 [2024-11-27 14:16:45.611749] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.735 [2024-11-27 14:16:45.613938] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.735 [2024-11-27 14:16:45.613996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:14.735 BaseBdev2 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.735 spare_malloc 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.735 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.995 spare_delay 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.995 [2024-11-27 14:16:45.695250] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:14.995 [2024-11-27 14:16:45.695323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.995 [2024-11-27 14:16:45.695348] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:17:14.995 [2024-11-27 14:16:45.695360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.995 [2024-11-27 14:16:45.697868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.995 [2024-11-27 14:16:45.697951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:14.995 spare 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.995 [2024-11-27 14:16:45.707291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.995 [2024-11-27 14:16:45.709304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.995 [2024-11-27 14:16:45.709444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:14.995 [2024-11-27 14:16:45.709479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:14.995 [2024-11-27 14:16:45.709816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:14.995 [2024-11-27 14:16:45.710039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:14.995 [2024-11-27 14:16:45.710056] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:14.995 [2024-11-27 14:16:45.710260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.995 14:16:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.995 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.995 "name": "raid_bdev1", 00:17:14.995 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:14.995 "strip_size_kb": 0, 00:17:14.995 "state": "online", 00:17:14.996 "raid_level": "raid1", 00:17:14.996 "superblock": false, 00:17:14.996 "num_base_bdevs": 2, 
00:17:14.996 "num_base_bdevs_discovered": 2, 00:17:14.996 "num_base_bdevs_operational": 2, 00:17:14.996 "base_bdevs_list": [ 00:17:14.996 { 00:17:14.996 "name": "BaseBdev1", 00:17:14.996 "uuid": "743c2943-8ba1-5c46-8a79-d204be19b58e", 00:17:14.996 "is_configured": true, 00:17:14.996 "data_offset": 0, 00:17:14.996 "data_size": 65536 00:17:14.996 }, 00:17:14.996 { 00:17:14.996 "name": "BaseBdev2", 00:17:14.996 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:14.996 "is_configured": true, 00:17:14.996 "data_offset": 0, 00:17:14.996 "data_size": 65536 00:17:14.996 } 00:17:14.996 ] 00:17:14.996 }' 00:17:14.996 14:16:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.996 14:16:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.255 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.255 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:15.255 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.255 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.255 [2024-11-27 14:16:46.190788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.514 [2024-11-27 14:16:46.290269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.514 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.515 "name": "raid_bdev1", 00:17:15.515 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:15.515 "strip_size_kb": 0, 00:17:15.515 "state": "online", 00:17:15.515 "raid_level": "raid1", 00:17:15.515 "superblock": false, 00:17:15.515 "num_base_bdevs": 2, 00:17:15.515 "num_base_bdevs_discovered": 1, 00:17:15.515 "num_base_bdevs_operational": 1, 00:17:15.515 "base_bdevs_list": [ 00:17:15.515 { 00:17:15.515 "name": null, 00:17:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.515 "is_configured": false, 00:17:15.515 "data_offset": 0, 00:17:15.515 "data_size": 65536 00:17:15.515 }, 00:17:15.515 { 00:17:15.515 "name": "BaseBdev2", 00:17:15.515 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:15.515 "is_configured": true, 00:17:15.515 "data_offset": 0, 00:17:15.515 "data_size": 65536 00:17:15.515 } 00:17:15.515 ] 00:17:15.515 }' 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.515 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.515 [2024-11-27 14:16:46.428034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:15.515 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:17:15.515 Zero copy mechanism will not be used. 00:17:15.515 Running I/O for 60 seconds... 00:17:16.084 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.084 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.084 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.084 [2024-11-27 14:16:46.776285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.084 14:16:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.084 14:16:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:16.084 [2024-11-27 14:16:46.823970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:16.084 [2024-11-27 14:16:46.826246] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.084 [2024-11-27 14:16:46.965430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:16.084 [2024-11-27 14:16:46.966174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:16.342 [2024-11-27 14:16:47.190773] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:16.342 [2024-11-27 14:16:47.191244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:16.860 149.00 IOPS, 447.00 MiB/s [2024-11-27T14:16:47.816Z] [2024-11-27 14:16:47.572492] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:16.860 [2024-11-27 14:16:47.572920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.119 "name": "raid_bdev1", 00:17:17.119 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:17.119 "strip_size_kb": 0, 00:17:17.119 "state": "online", 00:17:17.119 "raid_level": "raid1", 00:17:17.119 "superblock": false, 00:17:17.119 "num_base_bdevs": 2, 00:17:17.119 "num_base_bdevs_discovered": 2, 00:17:17.119 "num_base_bdevs_operational": 2, 00:17:17.119 "process": { 00:17:17.119 "type": "rebuild", 00:17:17.119 "target": "spare", 00:17:17.119 "progress": { 00:17:17.119 "blocks": 12288, 00:17:17.119 "percent": 18 00:17:17.119 } 00:17:17.119 }, 00:17:17.119 "base_bdevs_list": [ 00:17:17.119 { 00:17:17.119 "name": "spare", 00:17:17.119 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:17.119 
"is_configured": true, 00:17:17.119 "data_offset": 0, 00:17:17.119 "data_size": 65536 00:17:17.119 }, 00:17:17.119 { 00:17:17.119 "name": "BaseBdev2", 00:17:17.119 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:17.119 "is_configured": true, 00:17:17.119 "data_offset": 0, 00:17:17.119 "data_size": 65536 00:17:17.119 } 00:17:17.119 ] 00:17:17.119 }' 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.119 14:16:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.119 [2024-11-27 14:16:47.983890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.119 [2024-11-27 14:16:48.038793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:17.119 [2024-11-27 14:16:48.039133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:17.378 [2024-11-27 14:16:48.140765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.378 [2024-11-27 14:16:48.151190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.378 [2024-11-27 14:16:48.151296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.378 [2024-11-27 14:16:48.151321] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.378 [2024-11-27 14:16:48.204472] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:17.378 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.378 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.378 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.378 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.378 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.379 "name": "raid_bdev1", 00:17:17.379 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:17.379 "strip_size_kb": 0, 00:17:17.379 "state": "online", 00:17:17.379 "raid_level": "raid1", 00:17:17.379 "superblock": false, 00:17:17.379 "num_base_bdevs": 2, 00:17:17.379 "num_base_bdevs_discovered": 1, 00:17:17.379 "num_base_bdevs_operational": 1, 00:17:17.379 "base_bdevs_list": [ 00:17:17.379 { 00:17:17.379 "name": null, 00:17:17.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.379 "is_configured": false, 00:17:17.379 "data_offset": 0, 00:17:17.379 "data_size": 65536 00:17:17.379 }, 00:17:17.379 { 00:17:17.379 "name": "BaseBdev2", 00:17:17.379 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:17.379 "is_configured": true, 00:17:17.379 "data_offset": 0, 00:17:17.379 "data_size": 65536 00:17:17.379 } 00:17:17.379 ] 00:17:17.379 }' 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.379 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.896 147.00 IOPS, 441.00 MiB/s [2024-11-27T14:16:48.852Z] 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.896 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.896 "name": "raid_bdev1", 00:17:17.896 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:17.896 "strip_size_kb": 0, 00:17:17.896 "state": "online", 00:17:17.896 "raid_level": "raid1", 00:17:17.896 "superblock": false, 00:17:17.896 "num_base_bdevs": 2, 00:17:17.896 "num_base_bdevs_discovered": 1, 00:17:17.896 "num_base_bdevs_operational": 1, 00:17:17.896 "base_bdevs_list": [ 00:17:17.896 { 00:17:17.896 "name": null, 00:17:17.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.896 "is_configured": false, 00:17:17.896 "data_offset": 0, 00:17:17.897 "data_size": 65536 00:17:17.897 }, 00:17:17.897 { 00:17:17.897 "name": "BaseBdev2", 00:17:17.897 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:17.897 "is_configured": true, 00:17:17.897 "data_offset": 0, 00:17:17.897 "data_size": 65536 00:17:17.897 } 00:17:17.897 ] 00:17:17.897 }' 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:17.897 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.897 14:16:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.897 [2024-11-27 14:16:48.831935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.155 14:16:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.155 14:16:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:18.155 [2024-11-27 14:16:48.906931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:18.155 [2024-11-27 14:16:48.909149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:18.155 [2024-11-27 14:16:49.025262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:18.155 [2024-11-27 14:16:49.026044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:18.418 [2024-11-27 14:16:49.244452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:18.418 [2024-11-27 14:16:49.244837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:18.679 145.33 IOPS, 436.00 MiB/s [2024-11-27T14:16:49.635Z] [2024-11-27 14:16:49.614929] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:18.939 [2024-11-27 14:16:49.851341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:18.939 [2024-11-27 14:16:49.851928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:18.939 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.939 14:16:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.939 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.939 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.939 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.198 "name": "raid_bdev1", 00:17:19.198 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:19.198 "strip_size_kb": 0, 00:17:19.198 "state": "online", 00:17:19.198 "raid_level": "raid1", 00:17:19.198 "superblock": false, 00:17:19.198 "num_base_bdevs": 2, 00:17:19.198 "num_base_bdevs_discovered": 2, 00:17:19.198 "num_base_bdevs_operational": 2, 00:17:19.198 "process": { 00:17:19.198 "type": "rebuild", 00:17:19.198 "target": "spare", 00:17:19.198 "progress": { 00:17:19.198 "blocks": 14336, 00:17:19.198 "percent": 21 00:17:19.198 } 00:17:19.198 }, 00:17:19.198 "base_bdevs_list": [ 00:17:19.198 { 00:17:19.198 "name": "spare", 00:17:19.198 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:19.198 "is_configured": true, 00:17:19.198 "data_offset": 0, 00:17:19.198 "data_size": 65536 00:17:19.198 }, 00:17:19.198 { 00:17:19.198 "name": "BaseBdev2", 00:17:19.198 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:19.198 
"is_configured": true, 00:17:19.198 "data_offset": 0, 00:17:19.198 "data_size": 65536 00:17:19.198 } 00:17:19.198 ] 00:17:19.198 }' 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.198 [2024-11-27 14:16:49.979748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.198 14:16:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=418 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.198 14:16:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.198 "name": "raid_bdev1", 00:17:19.198 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:19.198 "strip_size_kb": 0, 00:17:19.198 "state": "online", 00:17:19.198 "raid_level": "raid1", 00:17:19.198 "superblock": false, 00:17:19.198 "num_base_bdevs": 2, 00:17:19.198 "num_base_bdevs_discovered": 2, 00:17:19.198 "num_base_bdevs_operational": 2, 00:17:19.198 "process": { 00:17:19.198 "type": "rebuild", 00:17:19.198 "target": "spare", 00:17:19.198 "progress": { 00:17:19.198 "blocks": 16384, 00:17:19.198 "percent": 25 00:17:19.198 } 00:17:19.198 }, 00:17:19.198 "base_bdevs_list": [ 00:17:19.198 { 00:17:19.198 "name": "spare", 00:17:19.198 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:19.198 "is_configured": true, 00:17:19.198 "data_offset": 0, 00:17:19.198 "data_size": 65536 00:17:19.198 }, 00:17:19.198 { 00:17:19.198 "name": "BaseBdev2", 00:17:19.198 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:19.198 "is_configured": true, 00:17:19.198 "data_offset": 0, 00:17:19.198 "data_size": 65536 00:17:19.198 } 00:17:19.198 ] 00:17:19.198 }' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.198 14:16:50 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.457 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.457 14:16:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.976 127.00 IOPS, 381.00 MiB/s [2024-11-27T14:16:50.932Z] [2024-11-27 14:16:50.772430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:20.236 [2024-11-27 14:16:50.999267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.495 "name": "raid_bdev1", 
00:17:20.495 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:20.495 "strip_size_kb": 0, 00:17:20.495 "state": "online", 00:17:20.495 "raid_level": "raid1", 00:17:20.495 "superblock": false, 00:17:20.495 "num_base_bdevs": 2, 00:17:20.495 "num_base_bdevs_discovered": 2, 00:17:20.495 "num_base_bdevs_operational": 2, 00:17:20.495 "process": { 00:17:20.495 "type": "rebuild", 00:17:20.495 "target": "spare", 00:17:20.495 "progress": { 00:17:20.495 "blocks": 32768, 00:17:20.495 "percent": 50 00:17:20.495 } 00:17:20.495 }, 00:17:20.495 "base_bdevs_list": [ 00:17:20.495 { 00:17:20.495 "name": "spare", 00:17:20.495 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:20.495 "is_configured": true, 00:17:20.495 "data_offset": 0, 00:17:20.495 "data_size": 65536 00:17:20.495 }, 00:17:20.495 { 00:17:20.495 "name": "BaseBdev2", 00:17:20.495 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:20.495 "is_configured": true, 00:17:20.495 "data_offset": 0, 00:17:20.495 "data_size": 65536 00:17:20.495 } 00:17:20.495 ] 00:17:20.495 }' 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.495 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.496 14:16:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.496 [2024-11-27 14:16:51.415512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:21.064 111.40 IOPS, 334.20 MiB/s [2024-11-27T14:16:52.020Z] [2024-11-27 14:16:51.842523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:21.633 [2024-11-27 
14:16:52.274294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.633 "name": "raid_bdev1", 00:17:21.633 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:21.633 "strip_size_kb": 0, 00:17:21.633 "state": "online", 00:17:21.633 "raid_level": "raid1", 00:17:21.633 "superblock": false, 00:17:21.633 "num_base_bdevs": 2, 00:17:21.633 "num_base_bdevs_discovered": 2, 00:17:21.633 "num_base_bdevs_operational": 2, 00:17:21.633 "process": { 00:17:21.633 "type": "rebuild", 00:17:21.633 "target": "spare", 00:17:21.633 "progress": { 00:17:21.633 "blocks": 51200, 00:17:21.633 "percent": 78 
00:17:21.633 } 00:17:21.633 }, 00:17:21.633 "base_bdevs_list": [ 00:17:21.633 { 00:17:21.633 "name": "spare", 00:17:21.633 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:21.633 "is_configured": true, 00:17:21.633 "data_offset": 0, 00:17:21.633 "data_size": 65536 00:17:21.633 }, 00:17:21.633 { 00:17:21.633 "name": "BaseBdev2", 00:17:21.633 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:21.633 "is_configured": true, 00:17:21.633 "data_offset": 0, 00:17:21.633 "data_size": 65536 00:17:21.633 } 00:17:21.633 ] 00:17:21.633 }' 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.633 99.17 IOPS, 297.50 MiB/s [2024-11-27T14:16:52.589Z] 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.633 14:16:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.633 [2024-11-27 14:16:52.477653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:21.893 [2024-11-27 14:16:52.799627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:22.460 [2024-11-27 14:16:53.314274] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:22.460 [2024-11-27 14:16:53.352681] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:22.460 [2024-11-27 14:16:53.355084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.722 89.86 IOPS, 269.57 MiB/s [2024-11-27T14:16:53.678Z] 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.722 "name": "raid_bdev1", 00:17:22.722 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:22.722 "strip_size_kb": 0, 00:17:22.722 "state": "online", 00:17:22.722 "raid_level": "raid1", 00:17:22.722 "superblock": false, 00:17:22.722 "num_base_bdevs": 2, 00:17:22.722 "num_base_bdevs_discovered": 2, 00:17:22.722 "num_base_bdevs_operational": 2, 00:17:22.722 "base_bdevs_list": [ 00:17:22.722 { 00:17:22.722 "name": "spare", 00:17:22.722 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:22.722 "is_configured": true, 00:17:22.722 "data_offset": 0, 00:17:22.722 "data_size": 65536 00:17:22.722 }, 00:17:22.722 { 00:17:22.722 "name": "BaseBdev2", 00:17:22.722 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:22.722 "is_configured": true, 00:17:22.722 
"data_offset": 0, 00:17:22.722 "data_size": 65536 00:17:22.722 } 00:17:22.722 ] 00:17:22.722 }' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:22.722 "name": "raid_bdev1", 00:17:22.722 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:22.722 "strip_size_kb": 0, 00:17:22.722 "state": 
"online", 00:17:22.722 "raid_level": "raid1", 00:17:22.722 "superblock": false, 00:17:22.722 "num_base_bdevs": 2, 00:17:22.722 "num_base_bdevs_discovered": 2, 00:17:22.722 "num_base_bdevs_operational": 2, 00:17:22.722 "base_bdevs_list": [ 00:17:22.722 { 00:17:22.722 "name": "spare", 00:17:22.722 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:22.722 "is_configured": true, 00:17:22.722 "data_offset": 0, 00:17:22.722 "data_size": 65536 00:17:22.722 }, 00:17:22.722 { 00:17:22.722 "name": "BaseBdev2", 00:17:22.722 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:22.722 "is_configured": true, 00:17:22.722 "data_offset": 0, 00:17:22.722 "data_size": 65536 00:17:22.722 } 00:17:22.722 ] 00:17:22.722 }' 00:17:22.722 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.986 14:16:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.986 "name": "raid_bdev1", 00:17:22.986 "uuid": "3e0575dc-352c-40d1-a6dc-9969f1b70418", 00:17:22.986 "strip_size_kb": 0, 00:17:22.986 "state": "online", 00:17:22.986 "raid_level": "raid1", 00:17:22.986 "superblock": false, 00:17:22.986 "num_base_bdevs": 2, 00:17:22.986 "num_base_bdevs_discovered": 2, 00:17:22.986 "num_base_bdevs_operational": 2, 00:17:22.986 "base_bdevs_list": [ 00:17:22.986 { 00:17:22.986 "name": "spare", 00:17:22.986 "uuid": "98e4c44b-c5c6-555e-a2ea-8208cd42d42a", 00:17:22.986 "is_configured": true, 00:17:22.986 "data_offset": 0, 00:17:22.986 "data_size": 65536 00:17:22.986 }, 00:17:22.986 { 00:17:22.986 "name": "BaseBdev2", 00:17:22.986 "uuid": "68e2fe7b-7c4d-5391-97ca-e5808169ae21", 00:17:22.986 "is_configured": true, 00:17:22.986 "data_offset": 0, 00:17:22.986 "data_size": 65536 00:17:22.986 } 00:17:22.986 ] 00:17:22.986 }' 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.986 14:16:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:17:23.557 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.557 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.557 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.557 [2024-11-27 14:16:54.262055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.557 [2024-11-27 14:16:54.262162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.557 00:17:23.557 Latency(us) 00:17:23.557 [2024-11-27T14:16:54.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.557 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:23.557 raid_bdev1 : 7.91 82.80 248.40 0.00 0.00 16562.65 334.48 111726.00 00:17:23.557 [2024-11-27T14:16:54.513Z] =================================================================================================================== 00:17:23.557 [2024-11-27T14:16:54.513Z] Total : 82.80 248.40 0.00 0.00 16562.65 334.48 111726.00 00:17:23.557 [2024-11-27 14:16:54.353876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.557 [2024-11-27 14:16:54.354015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.557 [2024-11-27 14:16:54.354158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.557 [2024-11-27 14:16:54.354226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:23.557 { 00:17:23.557 "results": [ 00:17:23.557 { 00:17:23.557 "job": "raid_bdev1", 00:17:23.557 "core_mask": "0x1", 00:17:23.557 "workload": "randrw", 00:17:23.557 "percentage": 50, 00:17:23.557 "status": "finished", 00:17:23.557 "queue_depth": 2, 00:17:23.557 "io_size": 3145728, 00:17:23.557 
"runtime": 7.910628, 00:17:23.557 "iops": 82.80000020225954, 00:17:23.557 "mibps": 248.40000060677863, 00:17:23.557 "io_failed": 0, 00:17:23.557 "io_timeout": 0, 00:17:23.557 "avg_latency_us": 16562.650264342148, 00:17:23.557 "min_latency_us": 334.4768558951965, 00:17:23.557 "max_latency_us": 111726.00174672488 00:17:23.557 } 00:17:23.557 ], 00:17:23.557 "core_count": 1 00:17:23.557 } 00:17:23.557 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.558 14:16:54 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.558 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:23.817 /dev/nbd0 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.817 1+0 records in 00:17:23.817 1+0 records out 00:17:23.817 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427823 s, 9.6 MB/s 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.817 14:16:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.817 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:24.076 /dev/nbd1 
00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.076 1+0 records in 00:17:24.076 1+0 records out 00:17:24.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470328 s, 8.7 MB/s 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:24.076 14:16:54 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:24.076 14:16:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.334 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.594 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76708 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76708 ']' 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76708 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76708 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:24.853 killing process with pid 76708 00:17:24.853 Received shutdown signal, test time was about 9.375938 seconds 00:17:24.853 00:17:24.853 Latency(us) 00:17:24.853 [2024-11-27T14:16:55.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.853 [2024-11-27T14:16:55.809Z] =================================================================================================================== 00:17:24.853 [2024-11-27T14:16:55.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76708' 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76708 00:17:24.853 [2024-11-27 14:16:55.788342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.853 14:16:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76708 00:17:25.421 [2024-11-27 14:16:56.073292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:26.797 00:17:26.797 real 0m12.955s 00:17:26.797 user 0m16.420s 00:17:26.797 sys 0m1.574s 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.797 ************************************ 00:17:26.797 END TEST raid_rebuild_test_io 00:17:26.797 ************************************ 
00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.797 14:16:57 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:26.797 14:16:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:26.797 14:16:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.797 14:16:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:26.797 ************************************ 00:17:26.797 START TEST raid_rebuild_test_sb_io 00:17:26.797 ************************************ 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77094 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77094 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77094 ']' 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.797 14:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:26.797 [2024-11-27 14:16:57.672037] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:17:26.797 [2024-11-27 14:16:57.672193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77094 ] 00:17:26.797 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:26.797 Zero copy mechanism will not be used. 
00:17:27.057 [2024-11-27 14:16:57.839674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.057 [2024-11-27 14:16:57.972366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.316 [2024-11-27 14:16:58.194096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.316 [2024-11-27 14:16:58.194173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 BaseBdev1_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 [2024-11-27 14:16:58.602743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:27.885 [2024-11-27 14:16:58.602907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.885 [2024-11-27 14:16:58.602939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:17:27.885 [2024-11-27 14:16:58.602955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.885 [2024-11-27 14:16:58.605578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.885 [2024-11-27 14:16:58.605642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:27.885 BaseBdev1 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 BaseBdev2_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 [2024-11-27 14:16:58.661014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:27.885 [2024-11-27 14:16:58.661082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.885 [2024-11-27 14:16:58.661106] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:27.885 [2024-11-27 14:16:58.661133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.885 [2024-11-27 14:16:58.663455] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.885 [2024-11-27 14:16:58.663494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:27.885 BaseBdev2 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 spare_malloc 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 spare_delay 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 [2024-11-27 14:16:58.748097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:27.885 [2024-11-27 14:16:58.748176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.885 [2024-11-27 14:16:58.748199] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:27.885 [2024-11-27 14:16:58.748211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.885 [2024-11-27 14:16:58.750441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.885 [2024-11-27 14:16:58.750484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:27.885 spare 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.885 [2024-11-27 14:16:58.760178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.885 [2024-11-27 14:16:58.761958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.885 [2024-11-27 14:16:58.762153] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:27.885 [2024-11-27 14:16:58.762169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:27.885 [2024-11-27 14:16:58.762413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:27.885 [2024-11-27 14:16:58.762584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:27.885 [2024-11-27 14:16:58.762593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:27.885 [2024-11-27 14:16:58.762757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.885 14:16:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.885 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.886 "name": "raid_bdev1", 00:17:27.886 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:27.886 
"strip_size_kb": 0, 00:17:27.886 "state": "online", 00:17:27.886 "raid_level": "raid1", 00:17:27.886 "superblock": true, 00:17:27.886 "num_base_bdevs": 2, 00:17:27.886 "num_base_bdevs_discovered": 2, 00:17:27.886 "num_base_bdevs_operational": 2, 00:17:27.886 "base_bdevs_list": [ 00:17:27.886 { 00:17:27.886 "name": "BaseBdev1", 00:17:27.886 "uuid": "1470e90f-fa23-5aa3-bb32-b95bb094fb06", 00:17:27.886 "is_configured": true, 00:17:27.886 "data_offset": 2048, 00:17:27.886 "data_size": 63488 00:17:27.886 }, 00:17:27.886 { 00:17:27.886 "name": "BaseBdev2", 00:17:27.886 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:27.886 "is_configured": true, 00:17:27.886 "data_offset": 2048, 00:17:27.886 "data_size": 63488 00:17:27.886 } 00:17:27.886 ] 00:17:27.886 }' 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.886 14:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:28.452 [2024-11-27 14:16:59.275564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.452 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.452 [2024-11-27 14:16:59.375060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.453 14:16:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.453 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.711 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.711 "name": "raid_bdev1", 00:17:28.711 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:28.711 "strip_size_kb": 0, 00:17:28.711 "state": "online", 00:17:28.711 "raid_level": "raid1", 00:17:28.711 "superblock": true, 00:17:28.711 "num_base_bdevs": 2, 00:17:28.711 "num_base_bdevs_discovered": 1, 00:17:28.711 "num_base_bdevs_operational": 1, 00:17:28.711 "base_bdevs_list": [ 00:17:28.711 { 00:17:28.711 "name": null, 00:17:28.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.711 "is_configured": false, 00:17:28.711 "data_offset": 0, 00:17:28.711 "data_size": 63488 00:17:28.711 }, 00:17:28.711 { 00:17:28.711 "name": "BaseBdev2", 00:17:28.711 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:28.711 "is_configured": true, 00:17:28.711 "data_offset": 2048, 00:17:28.711 "data_size": 63488 00:17:28.711 } 00:17:28.711 ] 00:17:28.711 }' 00:17:28.711 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.711 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.711 [2024-11-27 14:16:59.474524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:28.711 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:28.711 Zero copy mechanism will not be used. 00:17:28.711 Running I/O for 60 seconds... 00:17:28.970 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:28.970 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.970 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.970 [2024-11-27 14:16:59.836631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:28.970 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.970 14:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:28.970 [2024-11-27 14:16:59.876516] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:28.970 [2024-11-27 14:16:59.878475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:29.229 [2024-11-27 14:16:59.991763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.229 [2024-11-27 14:16:59.992376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:29.229 [2024-11-27 14:17:00.130634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:29.799 [2024-11-27 14:17:00.469333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 
12288 00:17:29.799 215.00 IOPS, 645.00 MiB/s [2024-11-27T14:17:00.755Z] [2024-11-27 14:17:00.683232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:29.799 [2024-11-27 14:17:00.683675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.059 "name": "raid_bdev1", 00:17:30.059 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:30.059 "strip_size_kb": 0, 00:17:30.059 "state": "online", 00:17:30.059 "raid_level": "raid1", 00:17:30.059 "superblock": true, 00:17:30.059 "num_base_bdevs": 2, 00:17:30.059 "num_base_bdevs_discovered": 2, 00:17:30.059 "num_base_bdevs_operational": 2, 00:17:30.059 
"process": { 00:17:30.059 "type": "rebuild", 00:17:30.059 "target": "spare", 00:17:30.059 "progress": { 00:17:30.059 "blocks": 10240, 00:17:30.059 "percent": 16 00:17:30.059 } 00:17:30.059 }, 00:17:30.059 "base_bdevs_list": [ 00:17:30.059 { 00:17:30.059 "name": "spare", 00:17:30.059 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:30.059 "is_configured": true, 00:17:30.059 "data_offset": 2048, 00:17:30.059 "data_size": 63488 00:17:30.059 }, 00:17:30.059 { 00:17:30.059 "name": "BaseBdev2", 00:17:30.059 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:30.059 "is_configured": true, 00:17:30.059 "data_offset": 2048, 00:17:30.059 "data_size": 63488 00:17:30.059 } 00:17:30.059 ] 00:17:30.059 }' 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.059 14:17:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.059 [2024-11-27 14:17:00.991087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.319 [2024-11-27 14:17:01.121564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.319 [2024-11-27 14:17:01.124357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.319 [2024-11-27 14:17:01.124398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.319 
[2024-11-27 14:17:01.124414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.319 [2024-11-27 14:17:01.166214] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.319 14:17:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.319 "name": "raid_bdev1", 00:17:30.319 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:30.319 "strip_size_kb": 0, 00:17:30.319 "state": "online", 00:17:30.319 "raid_level": "raid1", 00:17:30.319 "superblock": true, 00:17:30.319 "num_base_bdevs": 2, 00:17:30.319 "num_base_bdevs_discovered": 1, 00:17:30.319 "num_base_bdevs_operational": 1, 00:17:30.319 "base_bdevs_list": [ 00:17:30.319 { 00:17:30.319 "name": null, 00:17:30.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.319 "is_configured": false, 00:17:30.319 "data_offset": 0, 00:17:30.319 "data_size": 63488 00:17:30.319 }, 00:17:30.319 { 00:17:30.319 "name": "BaseBdev2", 00:17:30.319 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:30.319 "is_configured": true, 00:17:30.319 "data_offset": 2048, 00:17:30.319 "data_size": 63488 00:17:30.319 } 00:17:30.319 ] 00:17:30.319 }' 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.319 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.838 174.50 IOPS, 523.50 MiB/s [2024-11-27T14:17:01.794Z] 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.838 "name": "raid_bdev1", 00:17:30.838 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:30.838 "strip_size_kb": 0, 00:17:30.838 "state": "online", 00:17:30.838 "raid_level": "raid1", 00:17:30.838 "superblock": true, 00:17:30.838 "num_base_bdevs": 2, 00:17:30.838 "num_base_bdevs_discovered": 1, 00:17:30.838 "num_base_bdevs_operational": 1, 00:17:30.838 "base_bdevs_list": [ 00:17:30.838 { 00:17:30.838 "name": null, 00:17:30.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.838 "is_configured": false, 00:17:30.838 "data_offset": 0, 00:17:30.838 "data_size": 63488 00:17:30.838 }, 00:17:30.838 { 00:17:30.838 "name": "BaseBdev2", 00:17:30.838 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:30.838 "is_configured": true, 00:17:30.838 "data_offset": 2048, 00:17:30.838 "data_size": 63488 00:17:30.838 } 00:17:30.838 ] 00:17:30.838 }' 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev 
raid_bdev1 spare 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.838 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.154 [2024-11-27 14:17:01.800097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:31.154 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.154 14:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:31.154 [2024-11-27 14:17:01.861506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:31.154 [2024-11-27 14:17:01.863476] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.154 [2024-11-27 14:17:01.986563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:31.413 [2024-11-27 14:17:02.100901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:31.413 [2024-11-27 14:17:02.101287] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:31.672 [2024-11-27 14:17:02.421384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:31.932 182.33 IOPS, 547.00 MiB/s [2024-11-27T14:17:02.888Z] [2024-11-27 14:17:02.642650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=rebuild 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:31.932 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.191 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.191 "name": "raid_bdev1", 00:17:32.192 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:32.192 "strip_size_kb": 0, 00:17:32.192 "state": "online", 00:17:32.192 "raid_level": "raid1", 00:17:32.192 "superblock": true, 00:17:32.192 "num_base_bdevs": 2, 00:17:32.192 "num_base_bdevs_discovered": 2, 00:17:32.192 "num_base_bdevs_operational": 2, 00:17:32.192 "process": { 00:17:32.192 "type": "rebuild", 00:17:32.192 "target": "spare", 00:17:32.192 "progress": { 00:17:32.192 "blocks": 12288, 00:17:32.192 "percent": 19 00:17:32.192 } 00:17:32.192 }, 00:17:32.192 "base_bdevs_list": [ 00:17:32.192 { 00:17:32.192 "name": "spare", 00:17:32.192 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:32.192 "is_configured": true, 00:17:32.192 "data_offset": 2048, 00:17:32.192 "data_size": 63488 00:17:32.192 }, 00:17:32.192 { 00:17:32.192 "name": "BaseBdev2", 00:17:32.192 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:32.192 "is_configured": true, 00:17:32.192 "data_offset": 2048, 00:17:32.192 "data_size": 63488 00:17:32.192 } 00:17:32.192 ] 00:17:32.192 }' 
00:17:32.192 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.192 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.192 14:17:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.192 [2024-11-27 14:17:02.968654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:32.192 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.192 14:17:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.192 "name": "raid_bdev1", 00:17:32.192 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:32.192 "strip_size_kb": 0, 00:17:32.192 "state": "online", 00:17:32.192 "raid_level": "raid1", 00:17:32.192 "superblock": true, 00:17:32.192 "num_base_bdevs": 2, 00:17:32.192 "num_base_bdevs_discovered": 2, 00:17:32.192 "num_base_bdevs_operational": 2, 00:17:32.192 "process": { 00:17:32.192 "type": "rebuild", 00:17:32.192 "target": "spare", 00:17:32.192 "progress": { 00:17:32.192 "blocks": 14336, 00:17:32.192 "percent": 22 00:17:32.192 } 00:17:32.192 }, 00:17:32.192 "base_bdevs_list": [ 00:17:32.192 { 00:17:32.192 "name": "spare", 00:17:32.192 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:32.192 "is_configured": true, 00:17:32.192 "data_offset": 2048, 00:17:32.192 "data_size": 63488 00:17:32.192 }, 00:17:32.192 { 00:17:32.192 "name": "BaseBdev2", 00:17:32.192 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:32.192 "is_configured": true, 00:17:32.192 "data_offset": 2048, 00:17:32.192 "data_size": 63488 00:17:32.192 } 00:17:32.192 ] 00:17:32.192 }' 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.192 14:17:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.192 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.451 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.451 14:17:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.451 [2024-11-27 14:17:03.202185] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:32.969 154.25 IOPS, 462.75 MiB/s [2024-11-27T14:17:03.925Z] [2024-11-27 14:17:03.834619] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:33.229 [2024-11-27 14:17:04.050417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:33.229 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.488 "name": "raid_bdev1", 00:17:33.488 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:33.488 "strip_size_kb": 0, 00:17:33.488 "state": "online", 00:17:33.488 "raid_level": "raid1", 00:17:33.488 "superblock": true, 00:17:33.488 "num_base_bdevs": 2, 00:17:33.488 "num_base_bdevs_discovered": 2, 00:17:33.488 "num_base_bdevs_operational": 2, 00:17:33.488 "process": { 00:17:33.488 "type": "rebuild", 00:17:33.488 "target": "spare", 00:17:33.488 "progress": { 00:17:33.488 "blocks": 28672, 00:17:33.488 "percent": 45 00:17:33.488 } 00:17:33.488 }, 00:17:33.488 "base_bdevs_list": [ 00:17:33.488 { 00:17:33.488 "name": "spare", 00:17:33.488 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:33.488 "is_configured": true, 00:17:33.488 "data_offset": 2048, 00:17:33.488 "data_size": 63488 00:17:33.488 }, 00:17:33.488 { 00:17:33.488 "name": "BaseBdev2", 00:17:33.488 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:33.488 "is_configured": true, 00:17:33.488 "data_offset": 2048, 00:17:33.488 "data_size": 63488 00:17:33.488 } 00:17:33.488 ] 00:17:33.488 }' 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.488 14:17:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.488 [2024-11-27 
14:17:04.370521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:34.004 132.80 IOPS, 398.40 MiB/s [2024-11-27T14:17:04.960Z] [2024-11-27 14:17:04.802250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:17:34.004 [2024-11-27 14:17:04.904055] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:17:34.263 [2024-11-27 14:17:05.133469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.521 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.521 "name": "raid_bdev1", 00:17:34.521 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:34.521 "strip_size_kb": 0, 00:17:34.521 "state": "online", 00:17:34.522 "raid_level": "raid1", 00:17:34.522 "superblock": true, 00:17:34.522 "num_base_bdevs": 2, 00:17:34.522 "num_base_bdevs_discovered": 2, 00:17:34.522 "num_base_bdevs_operational": 2, 00:17:34.522 "process": { 00:17:34.522 "type": "rebuild", 00:17:34.522 "target": "spare", 00:17:34.522 "progress": { 00:17:34.522 "blocks": 45056, 00:17:34.522 "percent": 70 00:17:34.522 } 00:17:34.522 }, 00:17:34.522 "base_bdevs_list": [ 00:17:34.522 { 00:17:34.522 "name": "spare", 00:17:34.522 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:34.522 "is_configured": true, 00:17:34.522 "data_offset": 2048, 00:17:34.522 "data_size": 63488 00:17:34.522 }, 00:17:34.522 { 00:17:34.522 "name": "BaseBdev2", 00:17:34.522 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:34.522 "is_configured": true, 00:17:34.522 "data_offset": 2048, 00:17:34.522 "data_size": 63488 00:17:34.522 } 00:17:34.522 ] 00:17:34.522 }' 00:17:34.522 [2024-11-27 14:17:05.356252] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:17:34.522 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.522 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.522 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.522 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.522 14:17:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.832 120.00 IOPS, 360.00 MiB/s [2024-11-27T14:17:05.788Z] [2024-11-27 14:17:05.679094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:35.091 [2024-11-27 14:17:05.903928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:17:35.658 [2024-11-27 14:17:06.341181] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:35.658 [2024-11-27 14:17:06.447182] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:35.658 [2024-11-27 14:17:06.450098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.658 107.57 IOPS, 322.71 MiB/s [2024-11-27T14:17:06.614Z] 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.658 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:17:35.658 "name": "raid_bdev1", 00:17:35.658 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:35.658 "strip_size_kb": 0, 00:17:35.658 "state": "online", 00:17:35.658 "raid_level": "raid1", 00:17:35.658 "superblock": true, 00:17:35.658 "num_base_bdevs": 2, 00:17:35.658 "num_base_bdevs_discovered": 2, 00:17:35.658 "num_base_bdevs_operational": 2, 00:17:35.658 "base_bdevs_list": [ 00:17:35.658 { 00:17:35.658 "name": "spare", 00:17:35.658 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:35.658 "is_configured": true, 00:17:35.658 "data_offset": 2048, 00:17:35.658 "data_size": 63488 00:17:35.658 }, 00:17:35.658 { 00:17:35.658 "name": "BaseBdev2", 00:17:35.658 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:35.659 "is_configured": true, 00:17:35.659 "data_offset": 2048, 00:17:35.659 "data_size": 63488 00:17:35.659 } 00:17:35.659 ] 00:17:35.659 }' 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.659 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.917 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.917 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.917 "name": "raid_bdev1", 00:17:35.917 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:35.917 "strip_size_kb": 0, 00:17:35.917 "state": "online", 00:17:35.917 "raid_level": "raid1", 00:17:35.917 "superblock": true, 00:17:35.917 "num_base_bdevs": 2, 00:17:35.917 "num_base_bdevs_discovered": 2, 00:17:35.917 "num_base_bdevs_operational": 2, 00:17:35.917 "base_bdevs_list": [ 00:17:35.917 { 00:17:35.917 "name": "spare", 00:17:35.917 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:35.917 "is_configured": true, 00:17:35.917 "data_offset": 2048, 00:17:35.917 "data_size": 63488 00:17:35.917 }, 00:17:35.917 { 00:17:35.917 "name": "BaseBdev2", 00:17:35.917 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:35.917 "is_configured": true, 00:17:35.918 "data_offset": 2048, 00:17:35.918 "data_size": 63488 00:17:35.918 } 00:17:35.918 ] 00:17:35.918 }' 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
none == \n\o\n\e ]] 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.918 "name": "raid_bdev1", 00:17:35.918 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:35.918 "strip_size_kb": 0, 00:17:35.918 "state": "online", 00:17:35.918 
"raid_level": "raid1", 00:17:35.918 "superblock": true, 00:17:35.918 "num_base_bdevs": 2, 00:17:35.918 "num_base_bdevs_discovered": 2, 00:17:35.918 "num_base_bdevs_operational": 2, 00:17:35.918 "base_bdevs_list": [ 00:17:35.918 { 00:17:35.918 "name": "spare", 00:17:35.918 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:35.918 "is_configured": true, 00:17:35.918 "data_offset": 2048, 00:17:35.918 "data_size": 63488 00:17:35.918 }, 00:17:35.918 { 00:17:35.918 "name": "BaseBdev2", 00:17:35.918 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:35.918 "is_configured": true, 00:17:35.918 "data_offset": 2048, 00:17:35.918 "data_size": 63488 00:17:35.918 } 00:17:35.918 ] 00:17:35.918 }' 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.918 14:17:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.486 [2024-11-27 14:17:07.192485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:36.486 [2024-11-27 14:17:07.192516] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:36.486 00:17:36.486 Latency(us) 00:17:36.486 [2024-11-27T14:17:07.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.486 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:36.486 raid_bdev1 : 7.74 100.10 300.29 0.00 0.00 13162.80 311.22 115847.04 00:17:36.486 [2024-11-27T14:17:07.442Z] =================================================================================================================== 00:17:36.486 
[2024-11-27T14:17:07.442Z] Total : 100.10 300.29 0.00 0.00 13162.80 311.22 115847.04 00:17:36.486 [2024-11-27 14:17:07.226608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:36.486 [2024-11-27 14:17:07.226678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.486 [2024-11-27 14:17:07.226757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:36.486 [2024-11-27 14:17:07.226766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:36.486 { 00:17:36.486 "results": [ 00:17:36.486 { 00:17:36.486 "job": "raid_bdev1", 00:17:36.486 "core_mask": "0x1", 00:17:36.486 "workload": "randrw", 00:17:36.486 "percentage": 50, 00:17:36.486 "status": "finished", 00:17:36.486 "queue_depth": 2, 00:17:36.486 "io_size": 3145728, 00:17:36.486 "runtime": 7.742633, 00:17:36.486 "iops": 100.09514851084896, 00:17:36.486 "mibps": 300.2854455325469, 00:17:36.486 "io_failed": 0, 00:17:36.486 "io_timeout": 0, 00:17:36.486 "avg_latency_us": 13162.801512325681, 00:17:36.486 "min_latency_us": 311.22445414847164, 00:17:36.486 "max_latency_us": 115847.04279475982 00:17:36.486 } 00:17:36.486 ], 00:17:36.486 "core_count": 1 00:17:36.486 } 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.486 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:36.745 /dev/nbd0 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:36.745 14:17:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:36.745 1+0 records in 00:17:36.745 1+0 records out 00:17:36.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480328 s, 8.5 MB/s 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks 
/var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:36.745 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:37.003 /dev/nbd1 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:37.003 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:37.004 14:17:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.004 1+0 records in 00:17:37.004 1+0 records out 00:17:37.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266982 s, 15.3 MB/s 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:37.004 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.262 14:17:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.262 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:37.519 14:17:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.519 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.519 [2024-11-27 14:17:08.439375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.519 [2024-11-27 14:17:08.439478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.519 [2024-11-27 14:17:08.439539] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:37.519 [2024-11-27 14:17:08.439568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.519 [2024-11-27 14:17:08.441782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.520 [2024-11-27 14:17:08.441856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.520 [2024-11-27 14:17:08.442003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.520 [2024-11-27 14:17:08.442083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.520 [2024-11-27 14:17:08.442281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:37.520 spare 00:17:37.520 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.520 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:37.520 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.520 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.777 [2024-11-27 14:17:08.542233] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:37.777 [2024-11-27 14:17:08.542351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:37.777 [2024-11-27 14:17:08.542722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:37.777 [2024-11-27 14:17:08.542967] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:37.777 [2024-11-27 14:17:08.543015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:37.777 [2024-11-27 14:17:08.543301] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.777 "name": "raid_bdev1", 
00:17:37.777 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:37.777 "strip_size_kb": 0, 00:17:37.777 "state": "online", 00:17:37.777 "raid_level": "raid1", 00:17:37.777 "superblock": true, 00:17:37.777 "num_base_bdevs": 2, 00:17:37.777 "num_base_bdevs_discovered": 2, 00:17:37.777 "num_base_bdevs_operational": 2, 00:17:37.777 "base_bdevs_list": [ 00:17:37.777 { 00:17:37.777 "name": "spare", 00:17:37.777 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:37.777 "is_configured": true, 00:17:37.777 "data_offset": 2048, 00:17:37.777 "data_size": 63488 00:17:37.777 }, 00:17:37.777 { 00:17:37.777 "name": "BaseBdev2", 00:17:37.777 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:37.777 "is_configured": true, 00:17:37.777 "data_offset": 2048, 00:17:37.777 "data_size": 63488 00:17:37.777 } 00:17:37.777 ] 00:17:37.777 }' 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.777 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.341 14:17:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.341 14:17:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.341 "name": "raid_bdev1", 00:17:38.341 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:38.341 "strip_size_kb": 0, 00:17:38.341 "state": "online", 00:17:38.341 "raid_level": "raid1", 00:17:38.341 "superblock": true, 00:17:38.341 "num_base_bdevs": 2, 00:17:38.341 "num_base_bdevs_discovered": 2, 00:17:38.341 "num_base_bdevs_operational": 2, 00:17:38.341 "base_bdevs_list": [ 00:17:38.341 { 00:17:38.341 "name": "spare", 00:17:38.341 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:38.341 "is_configured": true, 00:17:38.341 "data_offset": 2048, 00:17:38.341 "data_size": 63488 00:17:38.341 }, 00:17:38.341 { 00:17:38.341 "name": "BaseBdev2", 00:17:38.341 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:38.341 "is_configured": true, 00:17:38.341 "data_offset": 2048, 00:17:38.341 "data_size": 63488 00:17:38.341 } 00:17:38.341 ] 00:17:38.341 }' 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.341 14:17:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.341 [2024-11-27 14:17:09.150369] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.341 14:17:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.341 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.341 "name": "raid_bdev1", 00:17:38.341 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:38.341 "strip_size_kb": 0, 00:17:38.341 "state": "online", 00:17:38.341 "raid_level": "raid1", 00:17:38.341 "superblock": true, 00:17:38.341 "num_base_bdevs": 2, 00:17:38.342 "num_base_bdevs_discovered": 1, 00:17:38.342 "num_base_bdevs_operational": 1, 00:17:38.342 "base_bdevs_list": [ 00:17:38.342 { 00:17:38.342 "name": null, 00:17:38.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.342 "is_configured": false, 00:17:38.342 "data_offset": 0, 00:17:38.342 "data_size": 63488 00:17:38.342 }, 00:17:38.342 { 00:17:38.342 "name": "BaseBdev2", 00:17:38.342 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:38.342 "is_configured": true, 00:17:38.342 "data_offset": 2048, 00:17:38.342 "data_size": 63488 00:17:38.342 } 00:17:38.342 ] 00:17:38.342 }' 00:17:38.342 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.342 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.907 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.907 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.907 14:17:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:38.907 [2024-11-27 14:17:09.573772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.907 [2024-11-27 14:17:09.574073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.907 [2024-11-27 14:17:09.574097] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:38.907 [2024-11-27 14:17:09.574157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.907 [2024-11-27 14:17:09.590784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:17:38.907 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.907 14:17:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:38.907 [2024-11-27 14:17:09.592710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.858 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.858 "name": "raid_bdev1", 00:17:39.858 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:39.858 "strip_size_kb": 0, 00:17:39.858 "state": "online", 00:17:39.858 "raid_level": "raid1", 00:17:39.858 "superblock": true, 00:17:39.858 "num_base_bdevs": 2, 00:17:39.858 "num_base_bdevs_discovered": 2, 00:17:39.858 "num_base_bdevs_operational": 2, 00:17:39.858 "process": { 00:17:39.858 "type": "rebuild", 00:17:39.858 "target": "spare", 00:17:39.858 "progress": { 00:17:39.858 "blocks": 20480, 00:17:39.858 "percent": 32 00:17:39.858 } 00:17:39.858 }, 00:17:39.858 "base_bdevs_list": [ 00:17:39.859 { 00:17:39.859 "name": "spare", 00:17:39.859 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:39.859 "is_configured": true, 00:17:39.859 "data_offset": 2048, 00:17:39.859 "data_size": 63488 00:17:39.859 }, 00:17:39.859 { 00:17:39.859 "name": "BaseBdev2", 00:17:39.859 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:39.859 "is_configured": true, 00:17:39.859 "data_offset": 2048, 00:17:39.859 "data_size": 63488 00:17:39.859 } 00:17:39.859 ] 00:17:39.859 }' 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd 
bdev_passthru_delete spare 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.859 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.859 [2024-11-27 14:17:10.756466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.859 [2024-11-27 14:17:10.798794] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:39.859 [2024-11-27 14:17:10.798935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.859 [2024-11-27 14:17:10.798975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:39.859 [2024-11-27 14:17:10.799001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.118 "name": "raid_bdev1", 00:17:40.118 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:40.118 "strip_size_kb": 0, 00:17:40.118 "state": "online", 00:17:40.118 "raid_level": "raid1", 00:17:40.118 "superblock": true, 00:17:40.118 "num_base_bdevs": 2, 00:17:40.118 "num_base_bdevs_discovered": 1, 00:17:40.118 "num_base_bdevs_operational": 1, 00:17:40.118 "base_bdevs_list": [ 00:17:40.118 { 00:17:40.118 "name": null, 00:17:40.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.118 "is_configured": false, 00:17:40.118 "data_offset": 0, 00:17:40.118 "data_size": 63488 00:17:40.118 }, 00:17:40.118 { 00:17:40.118 "name": "BaseBdev2", 00:17:40.118 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:40.118 "is_configured": true, 00:17:40.118 "data_offset": 2048, 00:17:40.118 "data_size": 63488 00:17:40.118 } 00:17:40.118 ] 00:17:40.118 }' 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.118 14:17:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 
00:17:40.376 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.376 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 [2024-11-27 14:17:11.297110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:40.376 [2024-11-27 14:17:11.297196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.376 [2024-11-27 14:17:11.297219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:40.376 [2024-11-27 14:17:11.297232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.376 [2024-11-27 14:17:11.297718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.376 [2024-11-27 14:17:11.297740] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:40.376 [2024-11-27 14:17:11.297842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:40.376 [2024-11-27 14:17:11.297858] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.376 [2024-11-27 14:17:11.297868] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:40.376 [2024-11-27 14:17:11.297892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.376 [2024-11-27 14:17:11.315014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:17:40.376 spare 00:17:40.376 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.376 14:17:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:40.376 [2024-11-27 14:17:11.317031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.756 "name": "raid_bdev1", 00:17:41.756 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:41.756 "strip_size_kb": 0, 00:17:41.756 
"state": "online", 00:17:41.756 "raid_level": "raid1", 00:17:41.756 "superblock": true, 00:17:41.756 "num_base_bdevs": 2, 00:17:41.756 "num_base_bdevs_discovered": 2, 00:17:41.756 "num_base_bdevs_operational": 2, 00:17:41.756 "process": { 00:17:41.756 "type": "rebuild", 00:17:41.756 "target": "spare", 00:17:41.756 "progress": { 00:17:41.756 "blocks": 20480, 00:17:41.756 "percent": 32 00:17:41.756 } 00:17:41.756 }, 00:17:41.756 "base_bdevs_list": [ 00:17:41.756 { 00:17:41.756 "name": "spare", 00:17:41.756 "uuid": "3da5abcf-64a0-571d-b635-d40084b23f23", 00:17:41.756 "is_configured": true, 00:17:41.756 "data_offset": 2048, 00:17:41.756 "data_size": 63488 00:17:41.756 }, 00:17:41.756 { 00:17:41.756 "name": "BaseBdev2", 00:17:41.756 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:41.756 "is_configured": true, 00:17:41.756 "data_offset": 2048, 00:17:41.756 "data_size": 63488 00:17:41.756 } 00:17:41.756 ] 00:17:41.756 }' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.756 [2024-11-27 14:17:12.472606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.756 [2024-11-27 14:17:12.522869] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:41.756 [2024-11-27 14:17:12.522984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.756 [2024-11-27 14:17:12.523005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.756 [2024-11-27 14:17:12.523014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.756 14:17:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.756 "name": "raid_bdev1", 00:17:41.756 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:41.756 "strip_size_kb": 0, 00:17:41.756 "state": "online", 00:17:41.756 "raid_level": "raid1", 00:17:41.756 "superblock": true, 00:17:41.756 "num_base_bdevs": 2, 00:17:41.756 "num_base_bdevs_discovered": 1, 00:17:41.756 "num_base_bdevs_operational": 1, 00:17:41.756 "base_bdevs_list": [ 00:17:41.756 { 00:17:41.756 "name": null, 00:17:41.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.756 "is_configured": false, 00:17:41.756 "data_offset": 0, 00:17:41.756 "data_size": 63488 00:17:41.756 }, 00:17:41.756 { 00:17:41.756 "name": "BaseBdev2", 00:17:41.756 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:41.756 "is_configured": true, 00:17:41.756 "data_offset": 2048, 00:17:41.756 "data_size": 63488 00:17:41.756 } 00:17:41.756 ] 00:17:41.756 }' 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.756 14:17:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.325 "name": "raid_bdev1", 00:17:42.325 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:42.325 "strip_size_kb": 0, 00:17:42.325 "state": "online", 00:17:42.325 "raid_level": "raid1", 00:17:42.325 "superblock": true, 00:17:42.325 "num_base_bdevs": 2, 00:17:42.325 "num_base_bdevs_discovered": 1, 00:17:42.325 "num_base_bdevs_operational": 1, 00:17:42.325 "base_bdevs_list": [ 00:17:42.325 { 00:17:42.325 "name": null, 00:17:42.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.325 "is_configured": false, 00:17:42.325 "data_offset": 0, 00:17:42.325 "data_size": 63488 00:17:42.325 }, 00:17:42.325 { 00:17:42.325 "name": "BaseBdev2", 00:17:42.325 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:42.325 "is_configured": true, 00:17:42.325 "data_offset": 2048, 00:17:42.325 "data_size": 63488 00:17:42.325 } 00:17:42.325 ] 00:17:42.325 }' 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.325 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.326 [2024-11-27 14:17:13.188964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:42.326 [2024-11-27 14:17:13.189028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.326 [2024-11-27 14:17:13.189060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:42.326 [2024-11-27 14:17:13.189071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.326 [2024-11-27 14:17:13.189543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.326 [2024-11-27 14:17:13.189574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:42.326 [2024-11-27 14:17:13.189666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:42.326 [2024-11-27 14:17:13.189681] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.326 [2024-11-27 14:17:13.189693] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:42.326 [2024-11-27 14:17:13.189702] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:42.326 BaseBdev1 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.326 14:17:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.264 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.523 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.523 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.523 "name": "raid_bdev1", 00:17:43.523 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:43.523 "strip_size_kb": 0, 00:17:43.523 "state": "online", 00:17:43.523 "raid_level": "raid1", 00:17:43.523 "superblock": true, 00:17:43.523 "num_base_bdevs": 2, 00:17:43.523 "num_base_bdevs_discovered": 1, 00:17:43.523 "num_base_bdevs_operational": 1, 00:17:43.523 "base_bdevs_list": [ 00:17:43.523 { 00:17:43.523 "name": null, 00:17:43.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.523 "is_configured": false, 00:17:43.523 "data_offset": 0, 00:17:43.523 "data_size": 63488 00:17:43.523 }, 00:17:43.523 { 00:17:43.523 "name": "BaseBdev2", 00:17:43.523 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:43.523 "is_configured": true, 00:17:43.523 "data_offset": 2048, 00:17:43.523 "data_size": 63488 00:17:43.523 } 00:17:43.523 ] 00:17:43.523 }' 00:17:43.523 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.523 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.783 "name": "raid_bdev1", 00:17:43.783 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:43.783 "strip_size_kb": 0, 00:17:43.783 "state": "online", 00:17:43.783 "raid_level": "raid1", 00:17:43.783 "superblock": true, 00:17:43.783 "num_base_bdevs": 2, 00:17:43.783 "num_base_bdevs_discovered": 1, 00:17:43.783 "num_base_bdevs_operational": 1, 00:17:43.783 "base_bdevs_list": [ 00:17:43.783 { 00:17:43.783 "name": null, 00:17:43.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.783 "is_configured": false, 00:17:43.783 "data_offset": 0, 00:17:43.783 "data_size": 63488 00:17:43.783 }, 00:17:43.783 { 00:17:43.783 "name": "BaseBdev2", 00:17:43.783 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:43.783 "is_configured": true, 00:17:43.783 "data_offset": 2048, 00:17:43.783 "data_size": 63488 00:17:43.783 } 00:17:43.783 ] 00:17:43.783 }' 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.783 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:44.043 [2024-11-27 14:17:14.766433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.043 [2024-11-27 14:17:14.766594] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.043 [2024-11-27 14:17:14.766609] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.043 request: 00:17:44.043 { 00:17:44.043 "base_bdev": "BaseBdev1", 00:17:44.043 "raid_bdev": "raid_bdev1", 00:17:44.043 "method": "bdev_raid_add_base_bdev", 00:17:44.043 "req_id": 1 00:17:44.043 } 00:17:44.043 Got JSON-RPC error response 00:17:44.043 response: 00:17:44.043 { 00:17:44.043 "code": -22, 00:17:44.043 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:44.043 } 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.043 14:17:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.075 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.075 "name": "raid_bdev1", 00:17:45.076 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:45.076 "strip_size_kb": 0, 00:17:45.076 "state": "online", 00:17:45.076 "raid_level": "raid1", 00:17:45.076 "superblock": true, 00:17:45.076 "num_base_bdevs": 2, 00:17:45.076 "num_base_bdevs_discovered": 1, 00:17:45.076 "num_base_bdevs_operational": 1, 00:17:45.076 "base_bdevs_list": [ 00:17:45.076 { 00:17:45.076 "name": null, 00:17:45.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.076 "is_configured": false, 00:17:45.076 "data_offset": 0, 00:17:45.076 "data_size": 63488 00:17:45.076 }, 00:17:45.076 { 00:17:45.076 "name": "BaseBdev2", 00:17:45.076 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:45.076 "is_configured": true, 00:17:45.076 "data_offset": 2048, 00:17:45.076 "data_size": 63488 00:17:45.076 } 00:17:45.076 ] 00:17:45.076 }' 00:17:45.076 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.076 14:17:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.335 14:17:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.335 "name": "raid_bdev1", 00:17:45.335 "uuid": "89d7ae26-c08d-4a6b-bbe5-52f2eae8756d", 00:17:45.335 "strip_size_kb": 0, 00:17:45.335 "state": "online", 00:17:45.335 "raid_level": "raid1", 00:17:45.335 "superblock": true, 00:17:45.335 "num_base_bdevs": 2, 00:17:45.335 "num_base_bdevs_discovered": 1, 00:17:45.335 "num_base_bdevs_operational": 1, 00:17:45.335 "base_bdevs_list": [ 00:17:45.335 { 00:17:45.335 "name": null, 00:17:45.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.335 "is_configured": false, 00:17:45.335 "data_offset": 0, 00:17:45.335 "data_size": 63488 00:17:45.335 }, 00:17:45.335 { 00:17:45.335 "name": "BaseBdev2", 00:17:45.335 "uuid": "6eaea6c5-ba9d-5241-80b3-a3610616476f", 00:17:45.335 "is_configured": true, 00:17:45.335 "data_offset": 2048, 00:17:45.335 "data_size": 63488 00:17:45.335 } 00:17:45.335 ] 00:17:45.335 }' 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.335 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.595 14:17:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77094 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77094 ']' 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77094 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77094 00:17:45.595 killing process with pid 77094 00:17:45.595 Received shutdown signal, test time was about 16.902423 seconds 00:17:45.595 00:17:45.595 Latency(us) 00:17:45.595 [2024-11-27T14:17:16.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.595 [2024-11-27T14:17:16.551Z] =================================================================================================================== 00:17:45.595 [2024-11-27T14:17:16.551Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77094' 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77094 00:17:45.595 [2024-11-27 14:17:16.346513] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.595 [2024-11-27 14:17:16.346643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.595 14:17:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77094 00:17:45.595 [2024-11-27 14:17:16.346698] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.595 [2024-11-27 14:17:16.346710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:45.854 [2024-11-27 14:17:16.576093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:47.233 00:17:47.233 real 0m20.183s 00:17:47.233 user 0m26.499s 00:17:47.233 sys 0m2.162s 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.233 ************************************ 00:17:47.233 END TEST raid_rebuild_test_sb_io 00:17:47.233 ************************************ 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:47.233 14:17:17 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:47.233 14:17:17 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:17:47.233 14:17:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:47.233 14:17:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.233 14:17:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.233 ************************************ 00:17:47.233 START TEST raid_rebuild_test 00:17:47.233 ************************************ 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:47.233 14:17:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77778 00:17:47.233 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77778 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77778 ']' 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:47.234 14:17:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.234 [2024-11-27 14:17:17.919000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:17:47.234 [2024-11-27 14:17:17.919221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:47.234 Zero copy mechanism will not be used. 00:17:47.234 -allocations --file-prefix=spdk_pid77778 ] 00:17:47.234 [2024-11-27 14:17:18.092483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.493 [2024-11-27 14:17:18.209084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.493 [2024-11-27 14:17:18.409606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.493 [2024-11-27 14:17:18.409773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.061 BaseBdev1_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:17:48.061 [2024-11-27 14:17:18.800239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:48.061 [2024-11-27 14:17:18.800298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.061 [2024-11-27 14:17:18.800320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:48.061 [2024-11-27 14:17:18.800331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.061 [2024-11-27 14:17:18.802434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.061 [2024-11-27 14:17:18.802476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:48.061 BaseBdev1 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.061 BaseBdev2_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.061 [2024-11-27 14:17:18.855027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:48.061 [2024-11-27 14:17:18.855090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:17:48.061 [2024-11-27 14:17:18.855134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:48.061 [2024-11-27 14:17:18.855147] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.061 [2024-11-27 14:17:18.857239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.061 [2024-11-27 14:17:18.857281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:48.061 BaseBdev2 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.061 BaseBdev3_malloc 00:17:48.061 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.062 [2024-11-27 14:17:18.918294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:48.062 [2024-11-27 14:17:18.918350] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.062 [2024-11-27 14:17:18.918373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:48.062 [2024-11-27 14:17:18.918383] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.062 [2024-11-27 14:17:18.920445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.062 [2024-11-27 14:17:18.920488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:48.062 BaseBdev3 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.062 BaseBdev4_malloc 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.062 [2024-11-27 14:17:18.973840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:48.062 [2024-11-27 14:17:18.973902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.062 [2024-11-27 14:17:18.973932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:48.062 [2024-11-27 14:17:18.973944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.062 [2024-11-27 14:17:18.976101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.062 [2024-11-27 14:17:18.976151] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:48.062 BaseBdev4 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.062 14:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.322 spare_malloc 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.322 spare_delay 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.322 [2024-11-27 14:17:19.038951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:48.322 [2024-11-27 14:17:19.039007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.322 [2024-11-27 14:17:19.039027] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:48.322 [2024-11-27 14:17:19.039037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.322 [2024-11-27 
14:17:19.041146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.322 [2024-11-27 14:17:19.041184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:48.322 spare 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.322 [2024-11-27 14:17:19.050976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:48.322 [2024-11-27 14:17:19.052817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.322 [2024-11-27 14:17:19.052883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.322 [2024-11-27 14:17:19.052936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:48.322 [2024-11-27 14:17:19.053013] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:48.322 [2024-11-27 14:17:19.053026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:48.322 [2024-11-27 14:17:19.053298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:48.322 [2024-11-27 14:17:19.053466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:48.322 [2024-11-27 14:17:19.053486] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:48.322 [2024-11-27 14:17:19.053627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.322 "name": "raid_bdev1", 00:17:48.322 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:48.322 "strip_size_kb": 0, 00:17:48.322 "state": "online", 00:17:48.322 "raid_level": 
"raid1", 00:17:48.322 "superblock": false, 00:17:48.322 "num_base_bdevs": 4, 00:17:48.322 "num_base_bdevs_discovered": 4, 00:17:48.322 "num_base_bdevs_operational": 4, 00:17:48.322 "base_bdevs_list": [ 00:17:48.322 { 00:17:48.322 "name": "BaseBdev1", 00:17:48.322 "uuid": "a4932625-412e-5019-a72f-cd293a98d2f0", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 }, 00:17:48.322 { 00:17:48.322 "name": "BaseBdev2", 00:17:48.322 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 }, 00:17:48.322 { 00:17:48.322 "name": "BaseBdev3", 00:17:48.322 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 }, 00:17:48.322 { 00:17:48.322 "name": "BaseBdev4", 00:17:48.322 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:48.322 "is_configured": true, 00:17:48.322 "data_offset": 0, 00:17:48.322 "data_size": 65536 00:17:48.322 } 00:17:48.322 ] 00:17:48.322 }' 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.322 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.580 [2024-11-27 14:17:19.486591] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.580 14:17:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.580 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.839 14:17:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:48.839 [2024-11-27 14:17:19.765814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:48.839 /dev/nbd0 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:49.099 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:49.100 1+0 records in 00:17:49.100 1+0 records out 00:17:49.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00358378 s, 1.1 MB/s 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:49.100 14:17:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:54.388 65536+0 records in 00:17:54.388 65536+0 records out 00:17:54.388 33554432 bytes (34 MB, 32 MiB) copied, 5.46281 s, 6.1 MB/s 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:54.388 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:54.648 [2024-11-27 14:17:25.538746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:54.648 
14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.648 [2024-11-27 14:17:25.576306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.648 14:17:25 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.648 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.908 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.908 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.908 "name": "raid_bdev1", 00:17:54.908 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:54.908 "strip_size_kb": 0, 00:17:54.908 "state": "online", 00:17:54.908 "raid_level": "raid1", 00:17:54.908 "superblock": false, 00:17:54.908 "num_base_bdevs": 4, 00:17:54.908 "num_base_bdevs_discovered": 3, 00:17:54.908 "num_base_bdevs_operational": 3, 00:17:54.908 "base_bdevs_list": [ 00:17:54.908 { 00:17:54.908 "name": null, 00:17:54.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.908 "is_configured": false, 00:17:54.908 "data_offset": 0, 00:17:54.908 "data_size": 65536 00:17:54.908 }, 00:17:54.908 { 00:17:54.908 "name": "BaseBdev2", 00:17:54.908 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:54.908 "is_configured": true, 00:17:54.908 "data_offset": 0, 00:17:54.908 "data_size": 65536 00:17:54.908 }, 00:17:54.908 { 00:17:54.908 "name": "BaseBdev3", 00:17:54.908 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:54.908 "is_configured": true, 00:17:54.908 "data_offset": 0, 00:17:54.908 "data_size": 65536 00:17:54.908 }, 00:17:54.908 { 00:17:54.908 "name": "BaseBdev4", 00:17:54.908 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:54.908 
"is_configured": true, 00:17:54.908 "data_offset": 0, 00:17:54.908 "data_size": 65536 00:17:54.908 } 00:17:54.908 ] 00:17:54.908 }' 00:17:54.908 14:17:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.908 14:17:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.168 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:55.168 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.168 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.168 [2024-11-27 14:17:26.051499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:55.168 [2024-11-27 14:17:26.066551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:55.168 14:17:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.168 14:17:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:55.168 [2024-11-27 14:17:26.068564] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.546 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.546 "name": "raid_bdev1", 00:17:56.546 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:56.546 "strip_size_kb": 0, 00:17:56.546 "state": "online", 00:17:56.546 "raid_level": "raid1", 00:17:56.546 "superblock": false, 00:17:56.546 "num_base_bdevs": 4, 00:17:56.546 "num_base_bdevs_discovered": 4, 00:17:56.546 "num_base_bdevs_operational": 4, 00:17:56.546 "process": { 00:17:56.546 "type": "rebuild", 00:17:56.546 "target": "spare", 00:17:56.546 "progress": { 00:17:56.546 "blocks": 20480, 00:17:56.546 "percent": 31 00:17:56.546 } 00:17:56.546 }, 00:17:56.546 "base_bdevs_list": [ 00:17:56.546 { 00:17:56.546 "name": "spare", 00:17:56.546 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:17:56.546 "is_configured": true, 00:17:56.546 "data_offset": 0, 00:17:56.546 "data_size": 65536 00:17:56.546 }, 00:17:56.546 { 00:17:56.546 "name": "BaseBdev2", 00:17:56.546 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:56.546 "is_configured": true, 00:17:56.546 "data_offset": 0, 00:17:56.546 "data_size": 65536 00:17:56.546 }, 00:17:56.546 { 00:17:56.546 "name": "BaseBdev3", 00:17:56.546 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:56.546 "is_configured": true, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 }, 00:17:56.547 { 00:17:56.547 "name": "BaseBdev4", 00:17:56.547 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:56.547 "is_configured": true, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 } 00:17:56.547 ] 00:17:56.547 }' 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.547 [2024-11-27 14:17:27.223780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.547 [2024-11-27 14:17:27.273909] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:56.547 [2024-11-27 14:17:27.274076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.547 [2024-11-27 14:17:27.274167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:56.547 [2024-11-27 14:17:27.274205] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.547 "name": "raid_bdev1", 00:17:56.547 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:56.547 "strip_size_kb": 0, 00:17:56.547 "state": "online", 00:17:56.547 "raid_level": "raid1", 00:17:56.547 "superblock": false, 00:17:56.547 "num_base_bdevs": 4, 00:17:56.547 "num_base_bdevs_discovered": 3, 00:17:56.547 "num_base_bdevs_operational": 3, 00:17:56.547 "base_bdevs_list": [ 00:17:56.547 { 00:17:56.547 "name": null, 00:17:56.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.547 "is_configured": false, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 }, 00:17:56.547 { 00:17:56.547 "name": "BaseBdev2", 00:17:56.547 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:56.547 "is_configured": true, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 }, 00:17:56.547 { 
00:17:56.547 "name": "BaseBdev3", 00:17:56.547 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:56.547 "is_configured": true, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 }, 00:17:56.547 { 00:17:56.547 "name": "BaseBdev4", 00:17:56.547 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:56.547 "is_configured": true, 00:17:56.547 "data_offset": 0, 00:17:56.547 "data_size": 65536 00:17:56.547 } 00:17:56.547 ] 00:17:56.547 }' 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.547 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.807 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.066 "name": "raid_bdev1", 00:17:57.066 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:57.066 "strip_size_kb": 0, 00:17:57.066 "state": "online", 
00:17:57.066 "raid_level": "raid1", 00:17:57.066 "superblock": false, 00:17:57.066 "num_base_bdevs": 4, 00:17:57.066 "num_base_bdevs_discovered": 3, 00:17:57.066 "num_base_bdevs_operational": 3, 00:17:57.066 "base_bdevs_list": [ 00:17:57.066 { 00:17:57.066 "name": null, 00:17:57.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.066 "is_configured": false, 00:17:57.066 "data_offset": 0, 00:17:57.066 "data_size": 65536 00:17:57.066 }, 00:17:57.066 { 00:17:57.066 "name": "BaseBdev2", 00:17:57.066 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:57.066 "is_configured": true, 00:17:57.066 "data_offset": 0, 00:17:57.066 "data_size": 65536 00:17:57.066 }, 00:17:57.066 { 00:17:57.066 "name": "BaseBdev3", 00:17:57.066 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:57.066 "is_configured": true, 00:17:57.066 "data_offset": 0, 00:17:57.066 "data_size": 65536 00:17:57.066 }, 00:17:57.066 { 00:17:57.066 "name": "BaseBdev4", 00:17:57.066 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:57.066 "is_configured": true, 00:17:57.066 "data_offset": 0, 00:17:57.066 "data_size": 65536 00:17:57.066 } 00:17:57.066 ] 00:17:57.066 }' 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.066 [2024-11-27 14:17:27.872848] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.066 [2024-11-27 14:17:27.888568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.066 14:17:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:57.066 [2024-11-27 14:17:27.890531] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.004 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.004 "name": "raid_bdev1", 00:17:58.004 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:58.004 "strip_size_kb": 0, 00:17:58.004 "state": "online", 00:17:58.004 "raid_level": "raid1", 00:17:58.004 "superblock": false, 00:17:58.004 "num_base_bdevs": 4, 00:17:58.004 
"num_base_bdevs_discovered": 4, 00:17:58.004 "num_base_bdevs_operational": 4, 00:17:58.004 "process": { 00:17:58.004 "type": "rebuild", 00:17:58.004 "target": "spare", 00:17:58.004 "progress": { 00:17:58.004 "blocks": 20480, 00:17:58.004 "percent": 31 00:17:58.004 } 00:17:58.004 }, 00:17:58.004 "base_bdevs_list": [ 00:17:58.004 { 00:17:58.004 "name": "spare", 00:17:58.004 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:17:58.004 "is_configured": true, 00:17:58.004 "data_offset": 0, 00:17:58.004 "data_size": 65536 00:17:58.004 }, 00:17:58.004 { 00:17:58.004 "name": "BaseBdev2", 00:17:58.004 "uuid": "55f9eb15-3e25-511a-a57c-d08d3b1f7de5", 00:17:58.004 "is_configured": true, 00:17:58.004 "data_offset": 0, 00:17:58.004 "data_size": 65536 00:17:58.004 }, 00:17:58.004 { 00:17:58.004 "name": "BaseBdev3", 00:17:58.004 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:58.004 "is_configured": true, 00:17:58.004 "data_offset": 0, 00:17:58.005 "data_size": 65536 00:17:58.005 }, 00:17:58.005 { 00:17:58.005 "name": "BaseBdev4", 00:17:58.005 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:58.005 "is_configured": true, 00:17:58.005 "data_offset": 0, 00:17:58.005 "data_size": 65536 00:17:58.005 } 00:17:58.005 ] 00:17:58.005 }' 00:17:58.005 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.264 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.265 14:17:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.265 [2024-11-27 14:17:29.009814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:58.265 [2024-11-27 14:17:29.096024] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.265 14:17:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.265 "name": "raid_bdev1", 00:17:58.265 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:58.265 "strip_size_kb": 0, 00:17:58.265 "state": "online", 00:17:58.265 "raid_level": "raid1", 00:17:58.265 "superblock": false, 00:17:58.265 "num_base_bdevs": 4, 00:17:58.265 "num_base_bdevs_discovered": 3, 00:17:58.265 "num_base_bdevs_operational": 3, 00:17:58.265 "process": { 00:17:58.265 "type": "rebuild", 00:17:58.265 "target": "spare", 00:17:58.265 "progress": { 00:17:58.265 "blocks": 24576, 00:17:58.265 "percent": 37 00:17:58.265 } 00:17:58.265 }, 00:17:58.265 "base_bdevs_list": [ 00:17:58.265 { 00:17:58.265 "name": "spare", 00:17:58.265 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:17:58.265 "is_configured": true, 00:17:58.265 "data_offset": 0, 00:17:58.265 "data_size": 65536 00:17:58.265 }, 00:17:58.265 { 00:17:58.265 "name": null, 00:17:58.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.265 "is_configured": false, 00:17:58.265 "data_offset": 0, 00:17:58.265 "data_size": 65536 00:17:58.265 }, 00:17:58.265 { 00:17:58.265 "name": "BaseBdev3", 00:17:58.265 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:58.265 "is_configured": true, 00:17:58.265 "data_offset": 0, 00:17:58.265 "data_size": 65536 00:17:58.265 }, 00:17:58.265 { 00:17:58.265 "name": "BaseBdev4", 00:17:58.265 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:58.265 "is_configured": true, 00:17:58.265 "data_offset": 0, 00:17:58.265 "data_size": 65536 00:17:58.265 } 00:17:58.265 ] 00:17:58.265 }' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.265 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.524 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.524 "name": "raid_bdev1", 00:17:58.524 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:58.524 "strip_size_kb": 0, 00:17:58.524 "state": "online", 00:17:58.524 "raid_level": "raid1", 00:17:58.524 "superblock": false, 00:17:58.524 "num_base_bdevs": 4, 00:17:58.524 "num_base_bdevs_discovered": 3, 00:17:58.524 "num_base_bdevs_operational": 3, 00:17:58.524 "process": { 00:17:58.524 "type": "rebuild", 00:17:58.524 "target": "spare", 00:17:58.524 "progress": { 
00:17:58.524 "blocks": 26624, 00:17:58.524 "percent": 40 00:17:58.524 } 00:17:58.524 }, 00:17:58.524 "base_bdevs_list": [ 00:17:58.524 { 00:17:58.524 "name": "spare", 00:17:58.524 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:17:58.524 "is_configured": true, 00:17:58.524 "data_offset": 0, 00:17:58.524 "data_size": 65536 00:17:58.524 }, 00:17:58.524 { 00:17:58.524 "name": null, 00:17:58.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.524 "is_configured": false, 00:17:58.524 "data_offset": 0, 00:17:58.524 "data_size": 65536 00:17:58.524 }, 00:17:58.524 { 00:17:58.524 "name": "BaseBdev3", 00:17:58.524 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:58.524 "is_configured": true, 00:17:58.524 "data_offset": 0, 00:17:58.524 "data_size": 65536 00:17:58.524 }, 00:17:58.524 { 00:17:58.524 "name": "BaseBdev4", 00:17:58.524 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:58.524 "is_configured": true, 00:17:58.524 "data_offset": 0, 00:17:58.524 "data_size": 65536 00:17:58.524 } 00:17:58.524 ] 00:17:58.525 }' 00:17:58.525 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.525 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.525 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.525 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.525 14:17:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:59.459 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.460 "name": "raid_bdev1", 00:17:59.460 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:17:59.460 "strip_size_kb": 0, 00:17:59.460 "state": "online", 00:17:59.460 "raid_level": "raid1", 00:17:59.460 "superblock": false, 00:17:59.460 "num_base_bdevs": 4, 00:17:59.460 "num_base_bdevs_discovered": 3, 00:17:59.460 "num_base_bdevs_operational": 3, 00:17:59.460 "process": { 00:17:59.460 "type": "rebuild", 00:17:59.460 "target": "spare", 00:17:59.460 "progress": { 00:17:59.460 "blocks": 49152, 00:17:59.460 "percent": 75 00:17:59.460 } 00:17:59.460 }, 00:17:59.460 "base_bdevs_list": [ 00:17:59.460 { 00:17:59.460 "name": "spare", 00:17:59.460 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:17:59.460 "is_configured": true, 00:17:59.460 "data_offset": 0, 00:17:59.460 "data_size": 65536 00:17:59.460 }, 00:17:59.460 { 00:17:59.460 "name": null, 00:17:59.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.460 "is_configured": false, 00:17:59.460 "data_offset": 0, 00:17:59.460 "data_size": 65536 00:17:59.460 }, 00:17:59.460 { 00:17:59.460 "name": "BaseBdev3", 00:17:59.460 "uuid": 
"65c66e55-f064-5d88-9566-94bba51a6f27", 00:17:59.460 "is_configured": true, 00:17:59.460 "data_offset": 0, 00:17:59.460 "data_size": 65536 00:17:59.460 }, 00:17:59.460 { 00:17:59.460 "name": "BaseBdev4", 00:17:59.460 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:17:59.460 "is_configured": true, 00:17:59.460 "data_offset": 0, 00:17:59.460 "data_size": 65536 00:17:59.460 } 00:17:59.460 ] 00:17:59.460 }' 00:17:59.460 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.719 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:59.719 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.719 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.719 14:17:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.288 [2024-11-27 14:17:31.104979] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:00.288 [2024-11-27 14:17:31.105144] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:00.288 [2024-11-27 14:17:31.105202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.856 14:17:31 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.856 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.856 "name": "raid_bdev1", 00:18:00.856 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:18:00.856 "strip_size_kb": 0, 00:18:00.856 "state": "online", 00:18:00.856 "raid_level": "raid1", 00:18:00.856 "superblock": false, 00:18:00.856 "num_base_bdevs": 4, 00:18:00.856 "num_base_bdevs_discovered": 3, 00:18:00.856 "num_base_bdevs_operational": 3, 00:18:00.857 "base_bdevs_list": [ 00:18:00.857 { 00:18:00.857 "name": "spare", 00:18:00.857 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": null, 00:18:00.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.857 "is_configured": false, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": "BaseBdev3", 00:18:00.857 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": "BaseBdev4", 00:18:00.857 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 } 00:18:00.857 ] 00:18:00.857 }' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.857 "name": "raid_bdev1", 00:18:00.857 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:18:00.857 "strip_size_kb": 0, 00:18:00.857 "state": "online", 00:18:00.857 "raid_level": "raid1", 00:18:00.857 "superblock": false, 00:18:00.857 "num_base_bdevs": 4, 00:18:00.857 "num_base_bdevs_discovered": 3, 00:18:00.857 "num_base_bdevs_operational": 3, 00:18:00.857 
"base_bdevs_list": [ 00:18:00.857 { 00:18:00.857 "name": "spare", 00:18:00.857 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": null, 00:18:00.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.857 "is_configured": false, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": "BaseBdev3", 00:18:00.857 "uuid": "65c66e55-f064-5d88-9566-94bba51a6f27", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 }, 00:18:00.857 { 00:18:00.857 "name": "BaseBdev4", 00:18:00.857 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:18:00.857 "is_configured": true, 00:18:00.857 "data_offset": 0, 00:18:00.857 "data_size": 65536 00:18:00.857 } 00:18:00.857 ] 00:18:00.857 }' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.857 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.117 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.117 "name": "raid_bdev1", 00:18:01.117 "uuid": "0d63a9eb-8ddc-4352-8b89-bc07453ae679", 00:18:01.117 "strip_size_kb": 0, 00:18:01.117 "state": "online", 00:18:01.117 "raid_level": "raid1", 00:18:01.117 "superblock": false, 00:18:01.117 "num_base_bdevs": 4, 00:18:01.117 "num_base_bdevs_discovered": 3, 00:18:01.117 "num_base_bdevs_operational": 3, 00:18:01.117 "base_bdevs_list": [ 00:18:01.117 { 00:18:01.117 "name": "spare", 00:18:01.117 "uuid": "f9f456a7-fc59-57fd-82bf-9f21539d5b3d", 00:18:01.117 "is_configured": true, 00:18:01.117 "data_offset": 0, 00:18:01.117 "data_size": 65536 00:18:01.117 }, 00:18:01.117 { 00:18:01.117 "name": null, 00:18:01.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.117 "is_configured": false, 00:18:01.117 "data_offset": 0, 00:18:01.117 "data_size": 65536 00:18:01.117 }, 00:18:01.117 { 00:18:01.117 "name": "BaseBdev3", 00:18:01.117 "uuid": 
"65c66e55-f064-5d88-9566-94bba51a6f27", 00:18:01.117 "is_configured": true, 00:18:01.117 "data_offset": 0, 00:18:01.117 "data_size": 65536 00:18:01.117 }, 00:18:01.117 { 00:18:01.117 "name": "BaseBdev4", 00:18:01.117 "uuid": "cf1adc75-527b-56db-8f04-331b7c42da24", 00:18:01.117 "is_configured": true, 00:18:01.117 "data_offset": 0, 00:18:01.117 "data_size": 65536 00:18:01.117 } 00:18:01.117 ] 00:18:01.117 }' 00:18:01.117 14:17:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.117 14:17:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.377 [2024-11-27 14:17:32.262223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.377 [2024-11-27 14:17:32.262258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.377 [2024-11-27 14:17:32.262341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.377 [2024-11-27 14:17:32.262424] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.377 [2024-11-27 14:17:32.262435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.377 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:01.637 /dev/nbd0 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:01.637 14:17:32 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.637 1+0 records in 00:18:01.637 1+0 records out 00:18:01.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534735 s, 7.7 MB/s 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.637 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:01.897 /dev/nbd1 00:18:01.897 
14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.897 1+0 records in 00:18:01.897 1+0 records out 00:18:01.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393159 s, 10.4 MB/s 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.897 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.157 14:17:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.417 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77778 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77778 ']' 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77778 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77778 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77778' 00:18:02.677 killing process with pid 77778 00:18:02.677 
14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77778 00:18:02.677 Received shutdown signal, test time was about 60.000000 seconds 00:18:02.677 00:18:02.677 Latency(us) 00:18:02.677 [2024-11-27T14:17:33.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.677 [2024-11-27T14:17:33.633Z] =================================================================================================================== 00:18:02.677 [2024-11-27T14:17:33.633Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.677 [2024-11-27 14:17:33.478381] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.677 14:17:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77778 00:18:03.245 [2024-11-27 14:17:33.971613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.184 ************************************ 00:18:04.184 END TEST raid_rebuild_test 00:18:04.184 ************************************ 00:18:04.184 14:17:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:04.184 00:18:04.184 real 0m17.281s 00:18:04.184 user 0m19.390s 00:18:04.184 sys 0m2.957s 00:18:04.184 14:17:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.184 14:17:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.444 14:17:35 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:18:04.444 14:17:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:04.444 14:17:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.444 14:17:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.444 ************************************ 00:18:04.444 START TEST raid_rebuild_test_sb 00:18:04.444 ************************************ 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:04.444 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78220 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78220 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78220 ']' 00:18:04.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.445 14:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:04.445 Zero copy mechanism will not be used. 00:18:04.445 [2024-11-27 14:17:35.270756] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:04.445 [2024-11-27 14:17:35.270872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78220 ] 00:18:04.705 [2024-11-27 14:17:35.445736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.705 [2024-11-27 14:17:35.570353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.982 [2024-11-27 14:17:35.773751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.982 [2024-11-27 14:17:35.773905] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.296 BaseBdev1_malloc 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.296 [2024-11-27 14:17:36.164380] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:05.296 [2024-11-27 14:17:36.164443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.296 [2024-11-27 14:17:36.164481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:05.296 [2024-11-27 14:17:36.164493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.296 [2024-11-27 14:17:36.166594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.296 [2024-11-27 14:17:36.166636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:05.296 BaseBdev1 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.296 BaseBdev2_malloc 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.296 [2024-11-27 14:17:36.219211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:05.296 [2024-11-27 14:17:36.219286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.296 [2024-11-27 14:17:36.219309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:05.296 [2024-11-27 14:17:36.219320] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.296 [2024-11-27 14:17:36.221512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.296 [2024-11-27 14:17:36.221596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:05.296 BaseBdev2 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.296 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 BaseBdev3_malloc 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 [2024-11-27 14:17:36.288249] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:05.556 [2024-11-27 14:17:36.288309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.556 [2024-11-27 14:17:36.288331] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:05.556 [2024-11-27 14:17:36.288344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.556 [2024-11-27 14:17:36.290427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.556 [2024-11-27 14:17:36.290469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:05.556 BaseBdev3 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 BaseBdev4_malloc 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.556 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.556 [2024-11-27 14:17:36.343891] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:05.556 [2024-11-27 14:17:36.343982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.556 [2024-11-27 14:17:36.344006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:05.557 [2024-11-27 14:17:36.344017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.557 [2024-11-27 14:17:36.346377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.557 [2024-11-27 14:17:36.346417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:05.557 BaseBdev4 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.557 spare_malloc 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.557 spare_delay 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.557 [2024-11-27 14:17:36.410546] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.557 [2024-11-27 14:17:36.410607] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.557 [2024-11-27 14:17:36.410644] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:05.557 [2024-11-27 14:17:36.410655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.557 [2024-11-27 14:17:36.412973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.557 [2024-11-27 14:17:36.413019] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.557 spare 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.557 [2024-11-27 14:17:36.422577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.557 [2024-11-27 14:17:36.424470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.557 [2024-11-27 14:17:36.424545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:18:05.557 [2024-11-27 14:17:36.424605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:05.557 [2024-11-27 14:17:36.424820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:05.557 [2024-11-27 14:17:36.424838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:05.557 [2024-11-27 14:17:36.425171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:05.557 [2024-11-27 14:17:36.425383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:05.557 [2024-11-27 14:17:36.425394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:05.557 [2024-11-27 14:17:36.425555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.557 "name": "raid_bdev1", 00:18:05.557 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:05.557 "strip_size_kb": 0, 00:18:05.557 "state": "online", 00:18:05.557 "raid_level": "raid1", 00:18:05.557 "superblock": true, 00:18:05.557 "num_base_bdevs": 4, 00:18:05.557 "num_base_bdevs_discovered": 4, 00:18:05.557 "num_base_bdevs_operational": 4, 00:18:05.557 "base_bdevs_list": [ 00:18:05.557 { 00:18:05.557 "name": "BaseBdev1", 00:18:05.557 "uuid": "3f2ef3ab-4efa-5ba4-b9de-1e9035b0cbba", 00:18:05.557 "is_configured": true, 00:18:05.557 "data_offset": 2048, 00:18:05.557 "data_size": 63488 00:18:05.557 }, 00:18:05.557 { 00:18:05.557 "name": "BaseBdev2", 00:18:05.557 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:05.557 "is_configured": true, 00:18:05.557 "data_offset": 2048, 00:18:05.557 "data_size": 63488 00:18:05.557 }, 00:18:05.557 { 00:18:05.557 "name": "BaseBdev3", 00:18:05.557 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:05.557 "is_configured": true, 00:18:05.557 "data_offset": 2048, 00:18:05.557 "data_size": 63488 00:18:05.557 }, 00:18:05.557 { 00:18:05.557 "name": "BaseBdev4", 00:18:05.557 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:05.557 "is_configured": true, 
00:18:05.557 "data_offset": 2048, 00:18:05.557 "data_size": 63488 00:18:05.557 } 00:18:05.557 ] 00:18:05.557 }' 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.557 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:06.126 [2024-11-27 14:17:36.874227] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 
-- # local write_unit_size 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:06.126 14:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:06.386 [2024-11-27 14:17:37.157402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:06.386 /dev/nbd0 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.386 1+0 records in 00:18:06.386 1+0 records out 00:18:06.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480993 s, 8.5 MB/s 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:06.386 14:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:11.681 63488+0 records in 00:18:11.681 63488+0 records out 00:18:11.681 32505856 bytes (33 MB, 31 MiB) copied, 5.25605 s, 6.2 MB/s 00:18:11.681 14:17:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.681 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:11.941 [2024-11-27 14:17:42.699203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.941 [2024-11-27 14:17:42.719989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:11.941 "name": "raid_bdev1", 00:18:11.941 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:11.941 "strip_size_kb": 0, 00:18:11.941 "state": "online", 00:18:11.941 "raid_level": "raid1", 00:18:11.941 "superblock": true, 00:18:11.941 "num_base_bdevs": 4, 00:18:11.941 "num_base_bdevs_discovered": 3, 00:18:11.941 "num_base_bdevs_operational": 3, 00:18:11.941 "base_bdevs_list": [ 00:18:11.941 { 00:18:11.941 "name": null, 00:18:11.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.941 "is_configured": false, 00:18:11.941 "data_offset": 0, 00:18:11.941 "data_size": 63488 00:18:11.941 }, 00:18:11.941 { 00:18:11.941 "name": "BaseBdev2", 00:18:11.941 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:11.941 "is_configured": true, 00:18:11.941 "data_offset": 2048, 00:18:11.941 "data_size": 63488 00:18:11.941 }, 00:18:11.941 { 00:18:11.941 "name": "BaseBdev3", 00:18:11.941 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:11.941 "is_configured": true, 00:18:11.941 "data_offset": 2048, 00:18:11.941 "data_size": 63488 00:18:11.941 }, 00:18:11.941 { 00:18:11.941 "name": "BaseBdev4", 00:18:11.941 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:11.941 "is_configured": true, 00:18:11.941 "data_offset": 2048, 00:18:11.941 "data_size": 63488 00:18:11.941 } 00:18:11.941 ] 00:18:11.941 }' 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.941 14:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.510 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:12.510 14:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.510 14:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.510 [2024-11-27 14:17:43.179201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:18:12.510 [2024-11-27 14:17:43.194576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:18:12.510 14:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.510 14:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:12.510 [2024-11-27 14:17:43.196515] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.455 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.456 "name": "raid_bdev1", 00:18:13.456 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:13.456 "strip_size_kb": 0, 00:18:13.456 "state": "online", 00:18:13.456 "raid_level": "raid1", 00:18:13.456 "superblock": true, 00:18:13.456 "num_base_bdevs": 4, 00:18:13.456 "num_base_bdevs_discovered": 4, 00:18:13.456 
"num_base_bdevs_operational": 4, 00:18:13.456 "process": { 00:18:13.456 "type": "rebuild", 00:18:13.456 "target": "spare", 00:18:13.456 "progress": { 00:18:13.456 "blocks": 20480, 00:18:13.456 "percent": 32 00:18:13.456 } 00:18:13.456 }, 00:18:13.456 "base_bdevs_list": [ 00:18:13.456 { 00:18:13.456 "name": "spare", 00:18:13.456 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:13.456 "is_configured": true, 00:18:13.456 "data_offset": 2048, 00:18:13.456 "data_size": 63488 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev2", 00:18:13.456 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:13.456 "is_configured": true, 00:18:13.456 "data_offset": 2048, 00:18:13.456 "data_size": 63488 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev3", 00:18:13.456 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:13.456 "is_configured": true, 00:18:13.456 "data_offset": 2048, 00:18:13.456 "data_size": 63488 00:18:13.456 }, 00:18:13.456 { 00:18:13.456 "name": "BaseBdev4", 00:18:13.456 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:13.456 "is_configured": true, 00:18:13.456 "data_offset": 2048, 00:18:13.456 "data_size": 63488 00:18:13.456 } 00:18:13.456 ] 00:18:13.456 }' 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.456 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:13.456 [2024-11-27 14:17:44.367742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.456 [2024-11-27 14:17:44.401929] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:13.456 [2024-11-27 14:17:44.401992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.456 [2024-11-27 14:17:44.402009] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:13.456 [2024-11-27 14:17:44.402019] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.715 14:17:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.715 "name": "raid_bdev1", 00:18:13.715 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:13.715 "strip_size_kb": 0, 00:18:13.715 "state": "online", 00:18:13.715 "raid_level": "raid1", 00:18:13.715 "superblock": true, 00:18:13.715 "num_base_bdevs": 4, 00:18:13.715 "num_base_bdevs_discovered": 3, 00:18:13.715 "num_base_bdevs_operational": 3, 00:18:13.715 "base_bdevs_list": [ 00:18:13.715 { 00:18:13.715 "name": null, 00:18:13.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.715 "is_configured": false, 00:18:13.715 "data_offset": 0, 00:18:13.715 "data_size": 63488 00:18:13.715 }, 00:18:13.715 { 00:18:13.715 "name": "BaseBdev2", 00:18:13.715 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:13.715 "is_configured": true, 00:18:13.715 "data_offset": 2048, 00:18:13.715 "data_size": 63488 00:18:13.715 }, 00:18:13.715 { 00:18:13.715 "name": "BaseBdev3", 00:18:13.715 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:13.715 "is_configured": true, 00:18:13.715 "data_offset": 2048, 00:18:13.715 "data_size": 63488 00:18:13.715 }, 00:18:13.715 { 00:18:13.715 "name": "BaseBdev4", 00:18:13.715 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:13.715 "is_configured": true, 00:18:13.715 "data_offset": 2048, 00:18:13.715 "data_size": 63488 00:18:13.715 } 00:18:13.715 ] 00:18:13.715 }' 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.715 14:17:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.974 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.974 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.975 14:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.234 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.235 "name": "raid_bdev1", 00:18:14.235 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:14.235 "strip_size_kb": 0, 00:18:14.235 "state": "online", 00:18:14.235 "raid_level": "raid1", 00:18:14.235 "superblock": true, 00:18:14.235 "num_base_bdevs": 4, 00:18:14.235 "num_base_bdevs_discovered": 3, 00:18:14.235 "num_base_bdevs_operational": 3, 00:18:14.235 "base_bdevs_list": [ 00:18:14.235 { 00:18:14.235 "name": null, 00:18:14.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.235 "is_configured": false, 00:18:14.235 "data_offset": 0, 00:18:14.235 "data_size": 63488 00:18:14.235 }, 00:18:14.235 { 00:18:14.235 "name": "BaseBdev2", 00:18:14.235 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:14.235 "is_configured": 
true, 00:18:14.235 "data_offset": 2048, 00:18:14.235 "data_size": 63488 00:18:14.235 }, 00:18:14.235 { 00:18:14.235 "name": "BaseBdev3", 00:18:14.235 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:14.235 "is_configured": true, 00:18:14.235 "data_offset": 2048, 00:18:14.235 "data_size": 63488 00:18:14.235 }, 00:18:14.235 { 00:18:14.235 "name": "BaseBdev4", 00:18:14.235 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:14.235 "is_configured": true, 00:18:14.235 "data_offset": 2048, 00:18:14.235 "data_size": 63488 00:18:14.235 } 00:18:14.235 ] 00:18:14.235 }' 00:18:14.235 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.235 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.235 14:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.235 [2024-11-27 14:17:45.045502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:14.235 [2024-11-27 14:17:45.059911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.235 14:17:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:14.235 [2024-11-27 14:17:45.061839] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.175 "name": "raid_bdev1", 00:18:15.175 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:15.175 "strip_size_kb": 0, 00:18:15.175 "state": "online", 00:18:15.175 "raid_level": "raid1", 00:18:15.175 "superblock": true, 00:18:15.175 "num_base_bdevs": 4, 00:18:15.175 "num_base_bdevs_discovered": 4, 00:18:15.175 "num_base_bdevs_operational": 4, 00:18:15.175 "process": { 00:18:15.175 "type": "rebuild", 00:18:15.175 "target": "spare", 00:18:15.175 "progress": { 00:18:15.175 "blocks": 20480, 00:18:15.175 "percent": 32 00:18:15.175 } 00:18:15.175 }, 00:18:15.175 "base_bdevs_list": [ 00:18:15.175 { 00:18:15.175 "name": "spare", 00:18:15.175 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:15.175 "is_configured": true, 00:18:15.175 "data_offset": 2048, 00:18:15.175 "data_size": 63488 00:18:15.175 }, 00:18:15.175 { 
00:18:15.175 "name": "BaseBdev2", 00:18:15.175 "uuid": "2100f3c9-a4ab-5ec4-ad89-674ab6ddf522", 00:18:15.175 "is_configured": true, 00:18:15.175 "data_offset": 2048, 00:18:15.175 "data_size": 63488 00:18:15.175 }, 00:18:15.175 { 00:18:15.175 "name": "BaseBdev3", 00:18:15.175 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:15.175 "is_configured": true, 00:18:15.175 "data_offset": 2048, 00:18:15.175 "data_size": 63488 00:18:15.175 }, 00:18:15.175 { 00:18:15.175 "name": "BaseBdev4", 00:18:15.175 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:15.175 "is_configured": true, 00:18:15.175 "data_offset": 2048, 00:18:15.175 "data_size": 63488 00:18:15.175 } 00:18:15.175 ] 00:18:15.175 }' 00:18:15.175 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:15.436 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.436 [2024-11-27 14:17:46.221286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.436 [2024-11-27 14:17:46.367422] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.436 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.695 "name": "raid_bdev1", 00:18:15.695 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 
00:18:15.695 "strip_size_kb": 0, 00:18:15.695 "state": "online", 00:18:15.695 "raid_level": "raid1", 00:18:15.695 "superblock": true, 00:18:15.695 "num_base_bdevs": 4, 00:18:15.695 "num_base_bdevs_discovered": 3, 00:18:15.695 "num_base_bdevs_operational": 3, 00:18:15.695 "process": { 00:18:15.695 "type": "rebuild", 00:18:15.695 "target": "spare", 00:18:15.695 "progress": { 00:18:15.695 "blocks": 24576, 00:18:15.695 "percent": 38 00:18:15.695 } 00:18:15.695 }, 00:18:15.695 "base_bdevs_list": [ 00:18:15.695 { 00:18:15.695 "name": "spare", 00:18:15.695 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": null, 00:18:15.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.695 "is_configured": false, 00:18:15.695 "data_offset": 0, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": "BaseBdev3", 00:18:15.695 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": "BaseBdev4", 00:18:15.695 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 } 00:18:15.695 ] 00:18:15.695 }' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=474 00:18:15.695 
14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.695 "name": "raid_bdev1", 00:18:15.695 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:15.695 "strip_size_kb": 0, 00:18:15.695 "state": "online", 00:18:15.695 "raid_level": "raid1", 00:18:15.695 "superblock": true, 00:18:15.695 "num_base_bdevs": 4, 00:18:15.695 "num_base_bdevs_discovered": 3, 00:18:15.695 "num_base_bdevs_operational": 3, 00:18:15.695 "process": { 00:18:15.695 "type": "rebuild", 00:18:15.695 "target": "spare", 00:18:15.695 "progress": { 00:18:15.695 "blocks": 26624, 00:18:15.695 "percent": 41 00:18:15.695 } 00:18:15.695 }, 00:18:15.695 "base_bdevs_list": [ 00:18:15.695 { 00:18:15.695 "name": "spare", 00:18:15.695 "uuid": 
"6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": null, 00:18:15.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.695 "is_configured": false, 00:18:15.695 "data_offset": 0, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": "BaseBdev3", 00:18:15.695 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 }, 00:18:15.695 { 00:18:15.695 "name": "BaseBdev4", 00:18:15.695 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:15.695 "is_configured": true, 00:18:15.695 "data_offset": 2048, 00:18:15.695 "data_size": 63488 00:18:15.695 } 00:18:15.695 ] 00:18:15.695 }' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.695 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.954 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.954 14:17:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.989 14:17:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.989 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.989 "name": "raid_bdev1", 00:18:16.989 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:16.989 "strip_size_kb": 0, 00:18:16.989 "state": "online", 00:18:16.989 "raid_level": "raid1", 00:18:16.989 "superblock": true, 00:18:16.989 "num_base_bdevs": 4, 00:18:16.989 "num_base_bdevs_discovered": 3, 00:18:16.989 "num_base_bdevs_operational": 3, 00:18:16.989 "process": { 00:18:16.989 "type": "rebuild", 00:18:16.989 "target": "spare", 00:18:16.989 "progress": { 00:18:16.989 "blocks": 51200, 00:18:16.989 "percent": 80 00:18:16.989 } 00:18:16.989 }, 00:18:16.989 "base_bdevs_list": [ 00:18:16.989 { 00:18:16.989 "name": "spare", 00:18:16.989 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 2048, 00:18:16.989 "data_size": 63488 00:18:16.989 }, 00:18:16.989 { 00:18:16.989 "name": null, 00:18:16.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.989 "is_configured": false, 00:18:16.989 "data_offset": 0, 00:18:16.989 "data_size": 63488 00:18:16.989 }, 00:18:16.989 { 00:18:16.989 "name": "BaseBdev3", 00:18:16.989 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 2048, 00:18:16.989 "data_size": 63488 00:18:16.989 }, 
00:18:16.989 { 00:18:16.989 "name": "BaseBdev4", 00:18:16.989 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:16.989 "is_configured": true, 00:18:16.989 "data_offset": 2048, 00:18:16.989 "data_size": 63488 00:18:16.989 } 00:18:16.989 ] 00:18:16.990 }' 00:18:16.990 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.990 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.990 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.990 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.990 14:17:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:17.555 [2024-11-27 14:17:48.276459] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:17.555 [2024-11-27 14:17:48.276538] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:17.555 [2024-11-27 14:17:48.276661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.120 14:17:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.120 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.120 "name": "raid_bdev1", 00:18:18.120 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:18.120 "strip_size_kb": 0, 00:18:18.120 "state": "online", 00:18:18.120 "raid_level": "raid1", 00:18:18.121 "superblock": true, 00:18:18.121 "num_base_bdevs": 4, 00:18:18.121 "num_base_bdevs_discovered": 3, 00:18:18.121 "num_base_bdevs_operational": 3, 00:18:18.121 "base_bdevs_list": [ 00:18:18.121 { 00:18:18.121 "name": "spare", 00:18:18.121 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": null, 00:18:18.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.121 "is_configured": false, 00:18:18.121 "data_offset": 0, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": "BaseBdev3", 00:18:18.121 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": "BaseBdev4", 00:18:18.121 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 } 00:18:18.121 ] 00:18:18.121 }' 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.121 14:17:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.121 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.121 "name": "raid_bdev1", 00:18:18.121 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:18.121 "strip_size_kb": 0, 00:18:18.121 "state": "online", 00:18:18.121 "raid_level": "raid1", 00:18:18.121 "superblock": true, 00:18:18.121 "num_base_bdevs": 4, 00:18:18.121 "num_base_bdevs_discovered": 3, 00:18:18.121 "num_base_bdevs_operational": 3, 00:18:18.121 "base_bdevs_list": [ 00:18:18.121 { 00:18:18.121 
"name": "spare", 00:18:18.121 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": null, 00:18:18.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.121 "is_configured": false, 00:18:18.121 "data_offset": 0, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": "BaseBdev3", 00:18:18.121 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 }, 00:18:18.121 { 00:18:18.121 "name": "BaseBdev4", 00:18:18.121 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:18.121 "is_configured": true, 00:18:18.121 "data_offset": 2048, 00:18:18.121 "data_size": 63488 00:18:18.121 } 00:18:18.121 ] 00:18:18.121 }' 00:18:18.121 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.121 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.380 "name": "raid_bdev1", 00:18:18.380 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:18.380 "strip_size_kb": 0, 00:18:18.380 "state": "online", 00:18:18.380 "raid_level": "raid1", 00:18:18.380 "superblock": true, 00:18:18.380 "num_base_bdevs": 4, 00:18:18.380 "num_base_bdevs_discovered": 3, 00:18:18.380 "num_base_bdevs_operational": 3, 00:18:18.380 "base_bdevs_list": [ 00:18:18.380 { 00:18:18.380 "name": "spare", 00:18:18.380 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:18.380 "is_configured": true, 00:18:18.380 "data_offset": 2048, 00:18:18.380 "data_size": 63488 00:18:18.380 }, 00:18:18.380 { 00:18:18.380 "name": null, 00:18:18.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.380 "is_configured": false, 00:18:18.380 "data_offset": 0, 00:18:18.380 "data_size": 63488 00:18:18.380 }, 00:18:18.380 { 00:18:18.380 "name": "BaseBdev3", 00:18:18.380 
"uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:18.380 "is_configured": true, 00:18:18.380 "data_offset": 2048, 00:18:18.380 "data_size": 63488 00:18:18.380 }, 00:18:18.380 { 00:18:18.380 "name": "BaseBdev4", 00:18:18.380 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:18.380 "is_configured": true, 00:18:18.380 "data_offset": 2048, 00:18:18.380 "data_size": 63488 00:18:18.380 } 00:18:18.380 ] 00:18:18.380 }' 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.380 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.639 [2024-11-27 14:17:49.540404] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.639 [2024-11-27 14:17:49.540438] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.639 [2024-11-27 14:17:49.540529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.639 [2024-11-27 14:17:49.540612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.639 [2024-11-27 14:17:49.540622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.639 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:18.898 /dev/nbd0 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:18.898 1+0 records in 00:18:18.898 1+0 records out 00:18:18.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471334 s, 8.7 MB/s 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:18.898 14:17:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:19.157 /dev/nbd1 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:19.415 1+0 records in 00:18:19.415 1+0 records out 00:18:19.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435059 s, 9.4 MB/s 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.415 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:19.674 14:17:50 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:19.674 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.933 [2024-11-27 14:17:50.784017] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:19.933 [2024-11-27 14:17:50.784072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.933 [2024-11-27 14:17:50.784094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:19.933 [2024-11-27 14:17:50.784119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.933 [2024-11-27 14:17:50.786381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.933 [2024-11-27 14:17:50.786433] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:19.933 [2024-11-27 14:17:50.786531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:19.933 [2024-11-27 14:17:50.786581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.933 [2024-11-27 14:17:50.786706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:19.933 [2024-11-27 14:17:50.786788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:19.933 spare 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.933 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.192 [2024-11-27 14:17:50.886681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:20.192 [2024-11-27 14:17:50.886710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:20.192 [2024-11-27 
14:17:50.887024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:20.192 [2024-11-27 14:17:50.887284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:20.192 [2024-11-27 14:17:50.887301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:20.192 [2024-11-27 14:17:50.887478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.192 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.193 14:17:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.193 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.193 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.193 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.193 "name": "raid_bdev1", 00:18:20.193 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:20.193 "strip_size_kb": 0, 00:18:20.193 "state": "online", 00:18:20.193 "raid_level": "raid1", 00:18:20.193 "superblock": true, 00:18:20.193 "num_base_bdevs": 4, 00:18:20.193 "num_base_bdevs_discovered": 3, 00:18:20.193 "num_base_bdevs_operational": 3, 00:18:20.193 "base_bdevs_list": [ 00:18:20.193 { 00:18:20.193 "name": "spare", 00:18:20.193 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:20.193 "is_configured": true, 00:18:20.193 "data_offset": 2048, 00:18:20.193 "data_size": 63488 00:18:20.193 }, 00:18:20.193 { 00:18:20.193 "name": null, 00:18:20.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.193 "is_configured": false, 00:18:20.193 "data_offset": 2048, 00:18:20.193 "data_size": 63488 00:18:20.193 }, 00:18:20.193 { 00:18:20.193 "name": "BaseBdev3", 00:18:20.193 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:20.193 "is_configured": true, 00:18:20.193 "data_offset": 2048, 00:18:20.193 "data_size": 63488 00:18:20.193 }, 00:18:20.193 { 00:18:20.193 "name": "BaseBdev4", 00:18:20.193 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:20.193 "is_configured": true, 00:18:20.193 "data_offset": 2048, 00:18:20.193 "data_size": 63488 00:18:20.193 } 00:18:20.193 ] 00:18:20.193 }' 00:18:20.193 14:17:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.193 14:17:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.451 "name": "raid_bdev1", 00:18:20.451 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:20.451 "strip_size_kb": 0, 00:18:20.451 "state": "online", 00:18:20.451 "raid_level": "raid1", 00:18:20.451 "superblock": true, 00:18:20.451 "num_base_bdevs": 4, 00:18:20.451 "num_base_bdevs_discovered": 3, 00:18:20.451 "num_base_bdevs_operational": 3, 00:18:20.451 "base_bdevs_list": [ 00:18:20.451 { 00:18:20.451 "name": "spare", 00:18:20.451 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:20.451 "is_configured": true, 00:18:20.451 "data_offset": 2048, 00:18:20.451 "data_size": 63488 00:18:20.451 }, 00:18:20.451 { 00:18:20.451 "name": null, 00:18:20.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.451 "is_configured": false, 00:18:20.451 "data_offset": 2048, 00:18:20.451 "data_size": 63488 00:18:20.451 }, 00:18:20.451 { 00:18:20.451 "name": 
"BaseBdev3", 00:18:20.451 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:20.451 "is_configured": true, 00:18:20.451 "data_offset": 2048, 00:18:20.451 "data_size": 63488 00:18:20.451 }, 00:18:20.451 { 00:18:20.451 "name": "BaseBdev4", 00:18:20.451 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:20.451 "is_configured": true, 00:18:20.451 "data_offset": 2048, 00:18:20.451 "data_size": 63488 00:18:20.451 } 00:18:20.451 ] 00:18:20.451 }' 00:18:20.451 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.710 [2024-11-27 14:17:51.494925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.710 14:17:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.710 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.711 "name": "raid_bdev1", 00:18:20.711 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:20.711 "strip_size_kb": 0, 00:18:20.711 "state": "online", 
00:18:20.711 "raid_level": "raid1", 00:18:20.711 "superblock": true, 00:18:20.711 "num_base_bdevs": 4, 00:18:20.711 "num_base_bdevs_discovered": 2, 00:18:20.711 "num_base_bdevs_operational": 2, 00:18:20.711 "base_bdevs_list": [ 00:18:20.711 { 00:18:20.711 "name": null, 00:18:20.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.711 "is_configured": false, 00:18:20.711 "data_offset": 0, 00:18:20.711 "data_size": 63488 00:18:20.711 }, 00:18:20.711 { 00:18:20.711 "name": null, 00:18:20.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.711 "is_configured": false, 00:18:20.711 "data_offset": 2048, 00:18:20.711 "data_size": 63488 00:18:20.711 }, 00:18:20.711 { 00:18:20.711 "name": "BaseBdev3", 00:18:20.711 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:20.711 "is_configured": true, 00:18:20.711 "data_offset": 2048, 00:18:20.711 "data_size": 63488 00:18:20.711 }, 00:18:20.711 { 00:18:20.711 "name": "BaseBdev4", 00:18:20.711 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:20.711 "is_configured": true, 00:18:20.711 "data_offset": 2048, 00:18:20.711 "data_size": 63488 00:18:20.711 } 00:18:20.711 ] 00:18:20.711 }' 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.711 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.969 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.969 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.969 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.227 [2024-11-27 14:17:51.926198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.227 [2024-11-27 14:17:51.926541] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:18:21.227 [2024-11-27 14:17:51.926626] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:21.227 [2024-11-27 14:17:51.926711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.227 [2024-11-27 14:17:51.943703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:21.227 14:17:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.227 14:17:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:21.227 [2024-11-27 14:17:51.946029] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.165 14:17:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.165 "name": "raid_bdev1", 00:18:22.165 "uuid": 
"b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:22.165 "strip_size_kb": 0, 00:18:22.165 "state": "online", 00:18:22.165 "raid_level": "raid1", 00:18:22.165 "superblock": true, 00:18:22.165 "num_base_bdevs": 4, 00:18:22.165 "num_base_bdevs_discovered": 3, 00:18:22.165 "num_base_bdevs_operational": 3, 00:18:22.165 "process": { 00:18:22.165 "type": "rebuild", 00:18:22.165 "target": "spare", 00:18:22.165 "progress": { 00:18:22.165 "blocks": 20480, 00:18:22.165 "percent": 32 00:18:22.165 } 00:18:22.165 }, 00:18:22.165 "base_bdevs_list": [ 00:18:22.165 { 00:18:22.165 "name": "spare", 00:18:22.165 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:22.165 "is_configured": true, 00:18:22.165 "data_offset": 2048, 00:18:22.165 "data_size": 63488 00:18:22.165 }, 00:18:22.165 { 00:18:22.165 "name": null, 00:18:22.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.165 "is_configured": false, 00:18:22.165 "data_offset": 2048, 00:18:22.165 "data_size": 63488 00:18:22.165 }, 00:18:22.165 { 00:18:22.165 "name": "BaseBdev3", 00:18:22.165 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:22.165 "is_configured": true, 00:18:22.165 "data_offset": 2048, 00:18:22.165 "data_size": 63488 00:18:22.165 }, 00:18:22.165 { 00:18:22.165 "name": "BaseBdev4", 00:18:22.165 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:22.165 "is_configured": true, 00:18:22.165 "data_offset": 2048, 00:18:22.165 "data_size": 63488 00:18:22.165 } 00:18:22.165 ] 00:18:22.165 }' 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.165 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.165 [2024-11-27 14:17:53.109062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.424 [2024-11-27 14:17:53.151993] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:22.424 [2024-11-27 14:17:53.152133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.424 [2024-11-27 14:17:53.152156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:22.424 [2024-11-27 14:17:53.152165] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.424 "name": "raid_bdev1", 00:18:22.424 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:22.424 "strip_size_kb": 0, 00:18:22.424 "state": "online", 00:18:22.424 "raid_level": "raid1", 00:18:22.424 "superblock": true, 00:18:22.424 "num_base_bdevs": 4, 00:18:22.424 "num_base_bdevs_discovered": 2, 00:18:22.424 "num_base_bdevs_operational": 2, 00:18:22.424 "base_bdevs_list": [ 00:18:22.424 { 00:18:22.424 "name": null, 00:18:22.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.424 "is_configured": false, 00:18:22.424 "data_offset": 0, 00:18:22.424 "data_size": 63488 00:18:22.424 }, 00:18:22.424 { 00:18:22.424 "name": null, 00:18:22.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.424 "is_configured": false, 00:18:22.424 "data_offset": 2048, 00:18:22.424 "data_size": 63488 00:18:22.424 }, 00:18:22.424 { 00:18:22.424 "name": "BaseBdev3", 00:18:22.424 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:22.424 "is_configured": true, 00:18:22.424 "data_offset": 2048, 00:18:22.424 "data_size": 63488 00:18:22.424 }, 00:18:22.424 { 00:18:22.424 "name": "BaseBdev4", 00:18:22.424 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:22.424 "is_configured": true, 00:18:22.424 
"data_offset": 2048, 00:18:22.424 "data_size": 63488 00:18:22.424 } 00:18:22.424 ] 00:18:22.424 }' 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.424 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.992 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:22.992 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.992 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.992 [2024-11-27 14:17:53.654509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:22.992 [2024-11-27 14:17:53.654653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:22.992 [2024-11-27 14:17:53.654704] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:22.992 [2024-11-27 14:17:53.654737] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:22.992 [2024-11-27 14:17:53.655260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:22.992 [2024-11-27 14:17:53.655331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:22.992 [2024-11-27 14:17:53.655462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:22.992 [2024-11-27 14:17:53.655505] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:22.992 [2024-11-27 14:17:53.655549] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:22.992 [2024-11-27 14:17:53.655608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.992 [2024-11-27 14:17:53.670057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:22.992 spare 00:18:22.992 14:17:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.992 [2024-11-27 14:17:53.671944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.992 14:17:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.929 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.930 "name": "raid_bdev1", 00:18:23.930 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:23.930 "strip_size_kb": 0, 00:18:23.930 "state": "online", 00:18:23.930 
"raid_level": "raid1", 00:18:23.930 "superblock": true, 00:18:23.930 "num_base_bdevs": 4, 00:18:23.930 "num_base_bdevs_discovered": 3, 00:18:23.930 "num_base_bdevs_operational": 3, 00:18:23.930 "process": { 00:18:23.930 "type": "rebuild", 00:18:23.930 "target": "spare", 00:18:23.930 "progress": { 00:18:23.930 "blocks": 20480, 00:18:23.930 "percent": 32 00:18:23.930 } 00:18:23.930 }, 00:18:23.930 "base_bdevs_list": [ 00:18:23.930 { 00:18:23.930 "name": "spare", 00:18:23.930 "uuid": "6d328132-0b03-59f3-a4e2-85021d4488bf", 00:18:23.930 "is_configured": true, 00:18:23.930 "data_offset": 2048, 00:18:23.930 "data_size": 63488 00:18:23.930 }, 00:18:23.930 { 00:18:23.930 "name": null, 00:18:23.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.930 "is_configured": false, 00:18:23.930 "data_offset": 2048, 00:18:23.930 "data_size": 63488 00:18:23.930 }, 00:18:23.930 { 00:18:23.930 "name": "BaseBdev3", 00:18:23.930 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:23.930 "is_configured": true, 00:18:23.930 "data_offset": 2048, 00:18:23.930 "data_size": 63488 00:18:23.930 }, 00:18:23.930 { 00:18:23.930 "name": "BaseBdev4", 00:18:23.930 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:23.930 "is_configured": true, 00:18:23.930 "data_offset": 2048, 00:18:23.930 "data_size": 63488 00:18:23.930 } 00:18:23.930 ] 00:18:23.930 }' 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.930 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.930 [2024-11-27 14:17:54.835976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.930 [2024-11-27 14:17:54.877320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:23.930 [2024-11-27 14:17:54.877426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.930 [2024-11-27 14:17:54.877444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.930 [2024-11-27 14:17:54.877454] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.189 
14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.189 "name": "raid_bdev1", 00:18:24.189 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:24.189 "strip_size_kb": 0, 00:18:24.189 "state": "online", 00:18:24.189 "raid_level": "raid1", 00:18:24.189 "superblock": true, 00:18:24.189 "num_base_bdevs": 4, 00:18:24.189 "num_base_bdevs_discovered": 2, 00:18:24.189 "num_base_bdevs_operational": 2, 00:18:24.189 "base_bdevs_list": [ 00:18:24.189 { 00:18:24.189 "name": null, 00:18:24.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.189 "is_configured": false, 00:18:24.189 "data_offset": 0, 00:18:24.189 "data_size": 63488 00:18:24.189 }, 00:18:24.189 { 00:18:24.189 "name": null, 00:18:24.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.189 "is_configured": false, 00:18:24.189 "data_offset": 2048, 00:18:24.189 "data_size": 63488 00:18:24.189 }, 00:18:24.189 { 00:18:24.189 "name": "BaseBdev3", 00:18:24.189 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:24.189 "is_configured": true, 00:18:24.189 "data_offset": 2048, 00:18:24.189 "data_size": 63488 00:18:24.189 }, 00:18:24.189 { 00:18:24.189 "name": "BaseBdev4", 00:18:24.189 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:24.189 "is_configured": true, 00:18:24.189 "data_offset": 2048, 00:18:24.189 "data_size": 63488 00:18:24.189 } 00:18:24.189 ] 00:18:24.189 }' 00:18:24.189 14:17:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.189 14:17:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.449 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.708 "name": "raid_bdev1", 00:18:24.708 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:24.708 "strip_size_kb": 0, 00:18:24.708 "state": "online", 00:18:24.708 "raid_level": "raid1", 00:18:24.708 "superblock": true, 00:18:24.708 "num_base_bdevs": 4, 00:18:24.708 "num_base_bdevs_discovered": 2, 00:18:24.708 "num_base_bdevs_operational": 2, 00:18:24.708 "base_bdevs_list": [ 00:18:24.708 { 00:18:24.708 "name": null, 00:18:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.708 "is_configured": false, 00:18:24.708 "data_offset": 0, 00:18:24.708 "data_size": 63488 00:18:24.708 }, 00:18:24.708 
{ 00:18:24.708 "name": null, 00:18:24.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.708 "is_configured": false, 00:18:24.708 "data_offset": 2048, 00:18:24.708 "data_size": 63488 00:18:24.708 }, 00:18:24.708 { 00:18:24.708 "name": "BaseBdev3", 00:18:24.708 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:24.708 "is_configured": true, 00:18:24.708 "data_offset": 2048, 00:18:24.708 "data_size": 63488 00:18:24.708 }, 00:18:24.708 { 00:18:24.708 "name": "BaseBdev4", 00:18:24.708 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:24.708 "is_configured": true, 00:18:24.708 "data_offset": 2048, 00:18:24.708 "data_size": 63488 00:18:24.708 } 00:18:24.708 ] 00:18:24.708 }' 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.708 [2024-11-27 14:17:55.546437] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:24.708 [2024-11-27 14:17:55.546498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.708 [2024-11-27 14:17:55.546517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:24.708 [2024-11-27 14:17:55.546530] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.708 [2024-11-27 14:17:55.546999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.708 [2024-11-27 14:17:55.547024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:24.708 [2024-11-27 14:17:55.547102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:24.708 [2024-11-27 14:17:55.547138] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:24.708 [2024-11-27 14:17:55.547147] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:24.708 [2024-11-27 14:17:55.547171] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:24.708 BaseBdev1 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.708 14:17:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.660 14:17:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.660 "name": "raid_bdev1", 00:18:25.660 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:25.660 "strip_size_kb": 0, 00:18:25.660 "state": "online", 00:18:25.660 "raid_level": "raid1", 00:18:25.660 "superblock": true, 00:18:25.660 "num_base_bdevs": 4, 00:18:25.660 "num_base_bdevs_discovered": 2, 00:18:25.660 "num_base_bdevs_operational": 2, 00:18:25.660 "base_bdevs_list": [ 00:18:25.660 { 00:18:25.660 "name": null, 00:18:25.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.660 "is_configured": false, 00:18:25.660 "data_offset": 0, 00:18:25.660 "data_size": 63488 00:18:25.660 }, 00:18:25.660 { 00:18:25.660 "name": null, 00:18:25.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.660 
"is_configured": false, 00:18:25.660 "data_offset": 2048, 00:18:25.660 "data_size": 63488 00:18:25.660 }, 00:18:25.660 { 00:18:25.660 "name": "BaseBdev3", 00:18:25.660 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:25.660 "is_configured": true, 00:18:25.660 "data_offset": 2048, 00:18:25.660 "data_size": 63488 00:18:25.660 }, 00:18:25.660 { 00:18:25.660 "name": "BaseBdev4", 00:18:25.660 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:25.660 "is_configured": true, 00:18:25.660 "data_offset": 2048, 00:18:25.660 "data_size": 63488 00:18:25.660 } 00:18:25.660 ] 00:18:25.660 }' 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.660 14:17:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.229 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.229 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:26.230 "name": "raid_bdev1", 00:18:26.230 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:26.230 "strip_size_kb": 0, 00:18:26.230 "state": "online", 00:18:26.230 "raid_level": "raid1", 00:18:26.230 "superblock": true, 00:18:26.230 "num_base_bdevs": 4, 00:18:26.230 "num_base_bdevs_discovered": 2, 00:18:26.230 "num_base_bdevs_operational": 2, 00:18:26.230 "base_bdevs_list": [ 00:18:26.230 { 00:18:26.230 "name": null, 00:18:26.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.230 "is_configured": false, 00:18:26.230 "data_offset": 0, 00:18:26.230 "data_size": 63488 00:18:26.230 }, 00:18:26.230 { 00:18:26.230 "name": null, 00:18:26.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.230 "is_configured": false, 00:18:26.230 "data_offset": 2048, 00:18:26.230 "data_size": 63488 00:18:26.230 }, 00:18:26.230 { 00:18:26.230 "name": "BaseBdev3", 00:18:26.230 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:26.230 "is_configured": true, 00:18:26.230 "data_offset": 2048, 00:18:26.230 "data_size": 63488 00:18:26.230 }, 00:18:26.230 { 00:18:26.230 "name": "BaseBdev4", 00:18:26.230 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:26.230 "is_configured": true, 00:18:26.230 "data_offset": 2048, 00:18:26.230 "data_size": 63488 00:18:26.230 } 00:18:26.230 ] 00:18:26.230 }' 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.230 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.230 [2024-11-27 14:17:57.179697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.230 [2024-11-27 14:17:57.179921] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:26.230 [2024-11-27 14:17:57.179939] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:26.489 request: 00:18:26.489 { 00:18:26.489 "base_bdev": "BaseBdev1", 00:18:26.489 "raid_bdev": "raid_bdev1", 00:18:26.489 "method": "bdev_raid_add_base_bdev", 00:18:26.489 "req_id": 1 00:18:26.489 } 00:18:26.489 Got JSON-RPC error response 00:18:26.489 response: 00:18:26.489 { 00:18:26.489 "code": -22, 00:18:26.489 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:26.489 } 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.489 14:17:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.428 "name": "raid_bdev1", 00:18:27.428 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:27.428 "strip_size_kb": 0, 00:18:27.428 "state": "online", 00:18:27.428 "raid_level": "raid1", 00:18:27.428 "superblock": true, 00:18:27.428 "num_base_bdevs": 4, 00:18:27.428 "num_base_bdevs_discovered": 2, 00:18:27.428 "num_base_bdevs_operational": 2, 00:18:27.428 "base_bdevs_list": [ 00:18:27.428 { 00:18:27.428 "name": null, 00:18:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.428 "is_configured": false, 00:18:27.428 "data_offset": 0, 00:18:27.428 "data_size": 63488 00:18:27.428 }, 00:18:27.428 { 00:18:27.428 "name": null, 00:18:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.428 "is_configured": false, 00:18:27.428 "data_offset": 2048, 00:18:27.428 "data_size": 63488 00:18:27.428 }, 00:18:27.428 { 00:18:27.428 "name": "BaseBdev3", 00:18:27.428 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:27.428 "is_configured": true, 00:18:27.428 "data_offset": 2048, 00:18:27.428 "data_size": 63488 00:18:27.428 }, 00:18:27.428 { 00:18:27.428 "name": "BaseBdev4", 00:18:27.428 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:27.428 "is_configured": true, 00:18:27.428 "data_offset": 2048, 00:18:27.428 "data_size": 63488 00:18:27.428 } 00:18:27.428 ] 00:18:27.428 }' 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.428 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.688 14:17:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.688 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.947 "name": "raid_bdev1", 00:18:27.947 "uuid": "b53e29dd-57c0-421f-ba49-7ce587eacf2e", 00:18:27.947 "strip_size_kb": 0, 00:18:27.947 "state": "online", 00:18:27.947 "raid_level": "raid1", 00:18:27.947 "superblock": true, 00:18:27.947 "num_base_bdevs": 4, 00:18:27.947 "num_base_bdevs_discovered": 2, 00:18:27.947 "num_base_bdevs_operational": 2, 00:18:27.947 "base_bdevs_list": [ 00:18:27.947 { 00:18:27.947 "name": null, 00:18:27.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.947 "is_configured": false, 00:18:27.947 "data_offset": 0, 00:18:27.947 "data_size": 63488 00:18:27.947 }, 00:18:27.947 { 00:18:27.947 "name": null, 00:18:27.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.947 "is_configured": false, 00:18:27.947 "data_offset": 2048, 00:18:27.947 "data_size": 63488 00:18:27.947 }, 00:18:27.947 { 00:18:27.947 "name": "BaseBdev3", 00:18:27.947 "uuid": "60578a20-c35f-5ecc-81a7-8d9f07d6243b", 00:18:27.947 "is_configured": true, 00:18:27.947 "data_offset": 2048, 00:18:27.947 "data_size": 63488 00:18:27.947 }, 
00:18:27.947 { 00:18:27.947 "name": "BaseBdev4", 00:18:27.947 "uuid": "f7b496f8-09b1-52b1-95b7-2eab4edfa226", 00:18:27.947 "is_configured": true, 00:18:27.947 "data_offset": 2048, 00:18:27.947 "data_size": 63488 00:18:27.947 } 00:18:27.947 ] 00:18:27.947 }' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78220 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78220 ']' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78220 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78220 00:18:27.947 killing process with pid 78220 00:18:27.947 Received shutdown signal, test time was about 60.000000 seconds 00:18:27.947 00:18:27.947 Latency(us) 00:18:27.947 [2024-11-27T14:17:58.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.947 [2024-11-27T14:17:58.903Z] =================================================================================================================== 00:18:27.947 [2024-11-27T14:17:58.903Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78220' 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78220 00:18:27.947 [2024-11-27 14:17:58.817386] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.947 [2024-11-27 14:17:58.817507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.947 14:17:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78220 00:18:27.947 [2024-11-27 14:17:58.817576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.947 [2024-11-27 14:17:58.817585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:28.515 [2024-11-27 14:17:59.315451] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:29.896 00:18:29.896 real 0m25.253s 00:18:29.896 user 0m30.935s 00:18:29.896 sys 0m3.679s 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.896 ************************************ 00:18:29.896 END TEST raid_rebuild_test_sb 00:18:29.896 ************************************ 00:18:29.896 14:18:00 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:29.896 14:18:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:29.896 14:18:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.896 14:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:18:29.896 ************************************ 00:18:29.896 START TEST raid_rebuild_test_io 00:18:29.896 ************************************ 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78979 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78979 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78979 ']' 00:18:29.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.896 14:18:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.896 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.896 Zero copy mechanism will not be used. 00:18:29.896 [2024-11-27 14:18:00.597958] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:29.896 [2024-11-27 14:18:00.598090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78979 ] 00:18:29.896 [2024-11-27 14:18:00.770252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.156 [2024-11-27 14:18:00.885535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.156 [2024-11-27 14:18:01.082768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.156 [2024-11-27 14:18:01.082836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 BaseBdev1_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 [2024-11-27 14:18:01.488536] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.725 [2024-11-27 14:18:01.488649] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.725 [2024-11-27 14:18:01.488676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.725 [2024-11-27 14:18:01.488688] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.725 [2024-11-27 14:18:01.490805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.725 [2024-11-27 14:18:01.490847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.725 BaseBdev1 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 BaseBdev2_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 [2024-11-27 14:18:01.539090] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:30.725 [2024-11-27 14:18:01.539181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.725 [2024-11-27 14:18:01.539209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.725 [2024-11-27 14:18:01.539221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.725 [2024-11-27 14:18:01.541541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.725 [2024-11-27 14:18:01.541584] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.725 BaseBdev2 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 BaseBdev3_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 [2024-11-27 14:18:01.601411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:30.725 [2024-11-27 14:18:01.601470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.725 [2024-11-27 14:18:01.601493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:30.725 [2024-11-27 14:18:01.601504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.725 [2024-11-27 14:18:01.603590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.725 [2024-11-27 14:18:01.603650] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:30.725 BaseBdev3 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 BaseBdev4_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 
00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.725 [2024-11-27 14:18:01.653756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:30.725 [2024-11-27 14:18:01.653818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.725 [2024-11-27 14:18:01.653838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:30.725 [2024-11-27 14:18:01.653849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.725 [2024-11-27 14:18:01.655914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.725 [2024-11-27 14:18:01.655954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:30.725 BaseBdev4 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.725 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.056 spare_malloc 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.056 spare_delay 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.056 [2024-11-27 14:18:01.713485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.056 [2024-11-27 14:18:01.713597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.056 [2024-11-27 14:18:01.713619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:31.056 [2024-11-27 14:18:01.713630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.056 [2024-11-27 14:18:01.715761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.056 [2024-11-27 14:18:01.715799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.056 spare 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.056 [2024-11-27 14:18:01.721507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.056 [2024-11-27 14:18:01.723276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.056 [2024-11-27 14:18:01.723333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:18:31.056 [2024-11-27 14:18:01.723382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.056 [2024-11-27 14:18:01.723458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.056 [2024-11-27 14:18:01.723471] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:31.056 [2024-11-27 14:18:01.723720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.056 [2024-11-27 14:18:01.723880] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.056 [2024-11-27 14:18:01.723904] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.056 [2024-11-27 14:18:01.724070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.056 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.057 "name": "raid_bdev1", 00:18:31.057 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:31.057 "strip_size_kb": 0, 00:18:31.057 "state": "online", 00:18:31.057 "raid_level": "raid1", 00:18:31.057 "superblock": false, 00:18:31.057 "num_base_bdevs": 4, 00:18:31.057 "num_base_bdevs_discovered": 4, 00:18:31.057 "num_base_bdevs_operational": 4, 00:18:31.057 "base_bdevs_list": [ 00:18:31.057 { 00:18:31.057 "name": "BaseBdev1", 00:18:31.057 "uuid": "b2398367-eb0e-5705-bc86-887f383f864d", 00:18:31.057 "is_configured": true, 00:18:31.057 "data_offset": 0, 00:18:31.057 "data_size": 65536 00:18:31.057 }, 00:18:31.057 { 00:18:31.057 "name": "BaseBdev2", 00:18:31.057 "uuid": "3570e7cf-8094-546b-8032-260ddc488299", 00:18:31.057 "is_configured": true, 00:18:31.057 "data_offset": 0, 00:18:31.057 "data_size": 65536 00:18:31.057 }, 00:18:31.057 { 00:18:31.057 "name": "BaseBdev3", 00:18:31.057 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:31.057 "is_configured": true, 00:18:31.057 "data_offset": 0, 00:18:31.057 "data_size": 65536 00:18:31.057 }, 00:18:31.057 { 00:18:31.057 "name": "BaseBdev4", 00:18:31.057 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:31.057 "is_configured": true, 00:18:31.057 
"data_offset": 0, 00:18:31.057 "data_size": 65536 00:18:31.057 } 00:18:31.057 ] 00:18:31.057 }' 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.057 14:18:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.324 [2024-11-27 14:18:02.185115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:31.324 [2024-11-27 14:18:02.264599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.324 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.324 14:18:02 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.583 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.583 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.584 "name": "raid_bdev1", 00:18:31.584 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:31.584 "strip_size_kb": 0, 00:18:31.584 "state": "online", 00:18:31.584 "raid_level": "raid1", 00:18:31.584 "superblock": false, 00:18:31.584 "num_base_bdevs": 4, 00:18:31.584 "num_base_bdevs_discovered": 3, 00:18:31.584 "num_base_bdevs_operational": 3, 00:18:31.584 "base_bdevs_list": [ 00:18:31.584 { 00:18:31.584 "name": null, 00:18:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.584 "is_configured": false, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev2", 00:18:31.584 "uuid": "3570e7cf-8094-546b-8032-260ddc488299", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev3", 00:18:31.584 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev4", 00:18:31.584 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 } 00:18:31.584 ] 00:18:31.584 }' 00:18:31.584 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.584 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.584 [2024-11-27 14:18:02.373229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:31.584 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:31.584 Zero copy mechanism will not be used. 00:18:31.584 Running I/O for 60 seconds... 00:18:31.843 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.843 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.843 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.843 [2024-11-27 14:18:02.700048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.843 14:18:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.843 14:18:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:31.843 [2024-11-27 14:18:02.791954] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:31.843 [2024-11-27 14:18:02.794256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.102 [2024-11-27 14:18:02.901974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.103 [2024-11-27 14:18:02.903651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.362 [2024-11-27 14:18:03.145884] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.362 [2024-11-27 14:18:03.146787] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.621 166.00 IOPS, 498.00 MiB/s [2024-11-27T14:18:03.577Z] [2024-11-27 14:18:03.485734] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:32.621 [2024-11-27 14:18:03.492272] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 
6144 offset_end: 12288 00:18:32.881 [2024-11-27 14:18:03.717924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:32.881 [2024-11-27 14:18:03.718888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.881 "name": "raid_bdev1", 00:18:32.881 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:32.881 "strip_size_kb": 0, 00:18:32.881 "state": "online", 00:18:32.881 "raid_level": "raid1", 00:18:32.881 "superblock": false, 00:18:32.881 "num_base_bdevs": 4, 00:18:32.881 "num_base_bdevs_discovered": 4, 00:18:32.881 "num_base_bdevs_operational": 4, 00:18:32.881 "process": { 00:18:32.881 "type": "rebuild", 00:18:32.881 "target": 
"spare", 00:18:32.881 "progress": { 00:18:32.881 "blocks": 10240, 00:18:32.881 "percent": 15 00:18:32.881 } 00:18:32.881 }, 00:18:32.881 "base_bdevs_list": [ 00:18:32.881 { 00:18:32.881 "name": "spare", 00:18:32.881 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:32.881 "is_configured": true, 00:18:32.881 "data_offset": 0, 00:18:32.881 "data_size": 65536 00:18:32.881 }, 00:18:32.881 { 00:18:32.881 "name": "BaseBdev2", 00:18:32.881 "uuid": "3570e7cf-8094-546b-8032-260ddc488299", 00:18:32.881 "is_configured": true, 00:18:32.881 "data_offset": 0, 00:18:32.881 "data_size": 65536 00:18:32.881 }, 00:18:32.881 { 00:18:32.881 "name": "BaseBdev3", 00:18:32.881 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:32.881 "is_configured": true, 00:18:32.881 "data_offset": 0, 00:18:32.881 "data_size": 65536 00:18:32.881 }, 00:18:32.881 { 00:18:32.881 "name": "BaseBdev4", 00:18:32.881 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:32.881 "is_configured": true, 00:18:32.881 "data_offset": 0, 00:18:32.881 "data_size": 65536 00:18:32.881 } 00:18:32.881 ] 00:18:32.881 }' 00:18:32.881 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.140 14:18:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.140 [2024-11-27 14:18:03.913620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.140 [2024-11-27 
14:18:04.060480] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.140 [2024-11-27 14:18:04.066297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.140 [2024-11-27 14:18:04.066359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.140 [2024-11-27 14:18:04.066378] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.398 [2024-11-27 14:18:04.099685] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:33.398 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.398 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:33.398 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.398 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.398 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.399 14:18:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.399 "name": "raid_bdev1", 00:18:33.399 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:33.399 "strip_size_kb": 0, 00:18:33.399 "state": "online", 00:18:33.399 "raid_level": "raid1", 00:18:33.399 "superblock": false, 00:18:33.399 "num_base_bdevs": 4, 00:18:33.399 "num_base_bdevs_discovered": 3, 00:18:33.399 "num_base_bdevs_operational": 3, 00:18:33.399 "base_bdevs_list": [ 00:18:33.399 { 00:18:33.399 "name": null, 00:18:33.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.399 "is_configured": false, 00:18:33.399 "data_offset": 0, 00:18:33.399 "data_size": 65536 00:18:33.399 }, 00:18:33.399 { 00:18:33.399 "name": "BaseBdev2", 00:18:33.399 "uuid": "3570e7cf-8094-546b-8032-260ddc488299", 00:18:33.399 "is_configured": true, 00:18:33.399 "data_offset": 0, 00:18:33.399 "data_size": 65536 00:18:33.399 }, 00:18:33.399 { 00:18:33.399 "name": "BaseBdev3", 00:18:33.399 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:33.399 "is_configured": true, 00:18:33.399 "data_offset": 0, 00:18:33.399 "data_size": 65536 00:18:33.399 }, 00:18:33.399 { 00:18:33.399 "name": "BaseBdev4", 00:18:33.399 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:33.399 "is_configured": true, 00:18:33.399 "data_offset": 0, 00:18:33.399 "data_size": 65536 00:18:33.399 } 00:18:33.399 ] 00:18:33.399 }' 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.399 14:18:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.658 141.50 IOPS, 424.50 MiB/s [2024-11-27T14:18:04.614Z] 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.658 "name": "raid_bdev1", 00:18:33.658 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:33.658 "strip_size_kb": 0, 00:18:33.658 "state": "online", 00:18:33.658 "raid_level": "raid1", 00:18:33.658 "superblock": false, 00:18:33.658 "num_base_bdevs": 4, 00:18:33.658 "num_base_bdevs_discovered": 3, 00:18:33.658 "num_base_bdevs_operational": 3, 00:18:33.658 "base_bdevs_list": [ 00:18:33.658 { 00:18:33.658 "name": null, 00:18:33.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.658 "is_configured": false, 00:18:33.658 "data_offset": 0, 00:18:33.658 "data_size": 65536 00:18:33.658 }, 00:18:33.658 { 00:18:33.658 "name": "BaseBdev2", 00:18:33.658 "uuid": 
"3570e7cf-8094-546b-8032-260ddc488299", 00:18:33.658 "is_configured": true, 00:18:33.658 "data_offset": 0, 00:18:33.658 "data_size": 65536 00:18:33.658 }, 00:18:33.658 { 00:18:33.658 "name": "BaseBdev3", 00:18:33.658 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:33.658 "is_configured": true, 00:18:33.658 "data_offset": 0, 00:18:33.658 "data_size": 65536 00:18:33.658 }, 00:18:33.658 { 00:18:33.658 "name": "BaseBdev4", 00:18:33.658 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:33.658 "is_configured": true, 00:18:33.658 "data_offset": 0, 00:18:33.658 "data_size": 65536 00:18:33.658 } 00:18:33.658 ] 00:18:33.658 }' 00:18:33.658 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.917 [2024-11-27 14:18:04.681029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.917 14:18:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:33.917 [2024-11-27 14:18:04.745504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:33.917 [2024-11-27 14:18:04.747804] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:18:34.176 [2024-11-27 14:18:04.871344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:34.176 [2024-11-27 14:18:05.009092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:34.176 [2024-11-27 14:18:05.009471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:34.442 [2024-11-27 14:18:05.234966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:34.442 [2024-11-27 14:18:05.336548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:34.702 145.67 IOPS, 437.00 MiB/s [2024-11-27T14:18:05.658Z] [2024-11-27 14:18:05.556666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:34.702 [2024-11-27 14:18:05.558330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.961 
14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.961 [2024-11-27 14:18:05.785912] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.961 "name": "raid_bdev1", 00:18:34.961 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:34.961 "strip_size_kb": 0, 00:18:34.961 "state": "online", 00:18:34.961 "raid_level": "raid1", 00:18:34.961 "superblock": false, 00:18:34.961 "num_base_bdevs": 4, 00:18:34.961 "num_base_bdevs_discovered": 4, 00:18:34.961 "num_base_bdevs_operational": 4, 00:18:34.961 "process": { 00:18:34.961 "type": "rebuild", 00:18:34.961 "target": "spare", 00:18:34.961 "progress": { 00:18:34.961 "blocks": 14336, 00:18:34.961 "percent": 21 00:18:34.961 } 00:18:34.961 }, 00:18:34.961 "base_bdevs_list": [ 00:18:34.961 { 00:18:34.961 "name": "spare", 00:18:34.961 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:34.961 "is_configured": true, 00:18:34.961 "data_offset": 0, 00:18:34.961 "data_size": 65536 00:18:34.961 }, 00:18:34.961 { 00:18:34.961 "name": "BaseBdev2", 00:18:34.961 "uuid": "3570e7cf-8094-546b-8032-260ddc488299", 00:18:34.961 "is_configured": true, 00:18:34.961 "data_offset": 0, 00:18:34.961 "data_size": 65536 00:18:34.961 }, 00:18:34.961 { 00:18:34.961 "name": "BaseBdev3", 00:18:34.961 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:34.961 "is_configured": true, 00:18:34.961 "data_offset": 0, 00:18:34.961 "data_size": 65536 00:18:34.961 }, 00:18:34.961 { 00:18:34.961 "name": "BaseBdev4", 00:18:34.961 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:34.961 "is_configured": true, 00:18:34.961 "data_offset": 
0, 00:18:34.961 "data_size": 65536 00:18:34.961 } 00:18:34.961 ] 00:18:34.961 }' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.961 14:18:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.961 [2024-11-27 14:18:05.910394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.221 [2024-11-27 14:18:06.023395] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:35.221 [2024-11-27 14:18:06.023430] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 
00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.221 "name": "raid_bdev1", 00:18:35.221 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:35.221 "strip_size_kb": 0, 00:18:35.221 "state": "online", 00:18:35.221 "raid_level": "raid1", 00:18:35.221 "superblock": false, 00:18:35.221 "num_base_bdevs": 4, 00:18:35.221 "num_base_bdevs_discovered": 3, 00:18:35.221 "num_base_bdevs_operational": 3, 00:18:35.221 "process": { 00:18:35.221 "type": "rebuild", 00:18:35.221 "target": "spare", 00:18:35.221 "progress": { 00:18:35.221 "blocks": 18432, 00:18:35.221 "percent": 28 00:18:35.221 } 00:18:35.221 }, 00:18:35.221 "base_bdevs_list": [ 00:18:35.221 { 00:18:35.221 "name": "spare", 00:18:35.221 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:35.221 "is_configured": true, 00:18:35.221 "data_offset": 0, 00:18:35.221 
"data_size": 65536 00:18:35.221 }, 00:18:35.221 { 00:18:35.221 "name": null, 00:18:35.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.221 "is_configured": false, 00:18:35.221 "data_offset": 0, 00:18:35.221 "data_size": 65536 00:18:35.221 }, 00:18:35.221 { 00:18:35.221 "name": "BaseBdev3", 00:18:35.221 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:35.221 "is_configured": true, 00:18:35.221 "data_offset": 0, 00:18:35.221 "data_size": 65536 00:18:35.221 }, 00:18:35.221 { 00:18:35.221 "name": "BaseBdev4", 00:18:35.221 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:35.221 "is_configured": true, 00:18:35.221 "data_offset": 0, 00:18:35.221 "data_size": 65536 00:18:35.221 } 00:18:35.221 ] 00:18:35.221 }' 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.221 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.221 [2024-11-27 14:18:06.144070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=494 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.481 "name": "raid_bdev1", 00:18:35.481 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:35.481 "strip_size_kb": 0, 00:18:35.481 "state": "online", 00:18:35.481 "raid_level": "raid1", 00:18:35.481 "superblock": false, 00:18:35.481 "num_base_bdevs": 4, 00:18:35.481 "num_base_bdevs_discovered": 3, 00:18:35.481 "num_base_bdevs_operational": 3, 00:18:35.481 "process": { 00:18:35.481 "type": "rebuild", 00:18:35.481 "target": "spare", 00:18:35.481 "progress": { 00:18:35.481 "blocks": 20480, 00:18:35.481 "percent": 31 00:18:35.481 } 00:18:35.481 }, 00:18:35.481 "base_bdevs_list": [ 00:18:35.481 { 00:18:35.481 "name": "spare", 00:18:35.481 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:35.481 "is_configured": true, 00:18:35.481 "data_offset": 0, 00:18:35.481 "data_size": 65536 00:18:35.481 }, 00:18:35.481 { 00:18:35.481 "name": null, 00:18:35.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.481 "is_configured": false, 00:18:35.481 "data_offset": 0, 00:18:35.481 "data_size": 65536 00:18:35.481 }, 00:18:35.481 { 00:18:35.481 "name": "BaseBdev3", 00:18:35.481 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:35.481 "is_configured": true, 00:18:35.481 "data_offset": 0, 
00:18:35.481 "data_size": 65536 00:18:35.481 }, 00:18:35.481 { 00:18:35.481 "name": "BaseBdev4", 00:18:35.481 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:35.481 "is_configured": true, 00:18:35.481 "data_offset": 0, 00:18:35.481 "data_size": 65536 00:18:35.481 } 00:18:35.481 ] 00:18:35.481 }' 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.481 14:18:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.481 [2024-11-27 14:18:06.360197] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:35.740 132.50 IOPS, 397.50 MiB/s [2024-11-27T14:18:06.696Z] [2024-11-27 14:18:06.689604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.676 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.676 "name": "raid_bdev1", 00:18:36.676 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:36.676 "strip_size_kb": 0, 00:18:36.676 "state": "online", 00:18:36.676 "raid_level": "raid1", 00:18:36.676 "superblock": false, 00:18:36.676 "num_base_bdevs": 4, 00:18:36.676 "num_base_bdevs_discovered": 3, 00:18:36.676 "num_base_bdevs_operational": 3, 00:18:36.676 "process": { 00:18:36.676 "type": "rebuild", 00:18:36.676 "target": "spare", 00:18:36.676 "progress": { 00:18:36.676 "blocks": 36864, 00:18:36.676 "percent": 56 00:18:36.676 } 00:18:36.676 }, 00:18:36.676 "base_bdevs_list": [ 00:18:36.676 { 00:18:36.676 "name": "spare", 00:18:36.676 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:36.676 "is_configured": true, 00:18:36.676 "data_offset": 0, 00:18:36.676 "data_size": 65536 00:18:36.676 }, 00:18:36.676 { 00:18:36.676 "name": null, 00:18:36.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.676 "is_configured": false, 00:18:36.676 "data_offset": 0, 00:18:36.676 "data_size": 65536 00:18:36.676 }, 00:18:36.676 { 00:18:36.676 "name": "BaseBdev3", 00:18:36.676 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:36.676 "is_configured": true, 00:18:36.676 "data_offset": 0, 00:18:36.676 "data_size": 65536 00:18:36.676 }, 00:18:36.676 { 00:18:36.676 "name": "BaseBdev4", 00:18:36.677 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:36.677 "is_configured": true, 00:18:36.677 "data_offset": 0, 
00:18:36.677 "data_size": 65536 00:18:36.677 } 00:18:36.677 ] 00:18:36.677 }' 00:18:36.677 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.677 118.00 IOPS, 354.00 MiB/s [2024-11-27T14:18:07.633Z] 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.677 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.677 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.677 14:18:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.677 [2024-11-27 14:18:07.497241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:36.936 [2024-11-27 14:18:07.722599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:37.763 104.33 IOPS, 313.00 MiB/s [2024-11-27T14:18:08.719Z] 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.763 14:18:08 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.763 "name": "raid_bdev1", 00:18:37.763 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:37.763 "strip_size_kb": 0, 00:18:37.763 "state": "online", 00:18:37.763 "raid_level": "raid1", 00:18:37.763 "superblock": false, 00:18:37.763 "num_base_bdevs": 4, 00:18:37.763 "num_base_bdevs_discovered": 3, 00:18:37.763 "num_base_bdevs_operational": 3, 00:18:37.763 "process": { 00:18:37.763 "type": "rebuild", 00:18:37.763 "target": "spare", 00:18:37.763 "progress": { 00:18:37.763 "blocks": 57344, 00:18:37.763 "percent": 87 00:18:37.763 } 00:18:37.763 }, 00:18:37.763 "base_bdevs_list": [ 00:18:37.763 { 00:18:37.763 "name": "spare", 00:18:37.763 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:37.763 "is_configured": true, 00:18:37.763 "data_offset": 0, 00:18:37.763 "data_size": 65536 00:18:37.763 }, 00:18:37.763 { 00:18:37.763 "name": null, 00:18:37.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.763 "is_configured": false, 00:18:37.763 "data_offset": 0, 00:18:37.763 "data_size": 65536 00:18:37.763 }, 00:18:37.763 { 00:18:37.763 "name": "BaseBdev3", 00:18:37.763 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:37.763 "is_configured": true, 00:18:37.763 "data_offset": 0, 00:18:37.763 "data_size": 65536 00:18:37.763 }, 00:18:37.763 { 00:18:37.763 "name": "BaseBdev4", 00:18:37.763 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:37.763 "is_configured": true, 00:18:37.763 "data_offset": 0, 00:18:37.763 "data_size": 65536 00:18:37.763 } 00:18:37.763 ] 00:18:37.763 }' 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.763 14:18:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.023 [2024-11-27 14:18:08.820603] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:38.023 [2024-11-27 14:18:08.920511] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:38.023 [2024-11-27 14:18:08.923140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.848 93.43 IOPS, 280.29 MiB/s [2024-11-27T14:18:09.804Z] 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.848 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.848 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.848 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.848 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.848 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.849 "name": "raid_bdev1", 00:18:38.849 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:38.849 "strip_size_kb": 0, 00:18:38.849 "state": "online", 00:18:38.849 "raid_level": "raid1", 00:18:38.849 "superblock": false, 00:18:38.849 "num_base_bdevs": 4, 00:18:38.849 "num_base_bdevs_discovered": 3, 00:18:38.849 "num_base_bdevs_operational": 3, 00:18:38.849 "base_bdevs_list": [ 00:18:38.849 { 00:18:38.849 "name": "spare", 00:18:38.849 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": null, 00:18:38.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.849 "is_configured": false, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": "BaseBdev3", 00:18:38.849 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": "BaseBdev4", 00:18:38.849 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 
00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.849 "name": "raid_bdev1", 00:18:38.849 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:38.849 "strip_size_kb": 0, 00:18:38.849 "state": "online", 00:18:38.849 "raid_level": "raid1", 00:18:38.849 "superblock": false, 00:18:38.849 "num_base_bdevs": 4, 00:18:38.849 "num_base_bdevs_discovered": 3, 00:18:38.849 "num_base_bdevs_operational": 3, 00:18:38.849 "base_bdevs_list": [ 00:18:38.849 { 00:18:38.849 "name": "spare", 00:18:38.849 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": null, 00:18:38.849 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:38.849 "is_configured": false, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": "BaseBdev3", 00:18:38.849 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 }, 00:18:38.849 { 00:18:38.849 "name": "BaseBdev4", 00:18:38.849 "uuid": "d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:38.849 "is_configured": true, 00:18:38.849 "data_offset": 0, 00:18:38.849 "data_size": 65536 00:18:38.849 } 00:18:38.849 ] 00:18:38.849 }' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.849 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.109 14:18:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.109 "name": "raid_bdev1", 00:18:39.109 "uuid": "4a0bea10-cb14-4795-b571-0e4e9e5e22b4", 00:18:39.109 "strip_size_kb": 0, 00:18:39.109 "state": "online", 00:18:39.109 "raid_level": "raid1", 00:18:39.109 "superblock": false, 00:18:39.109 "num_base_bdevs": 4, 00:18:39.109 "num_base_bdevs_discovered": 3, 00:18:39.109 "num_base_bdevs_operational": 3, 00:18:39.109 "base_bdevs_list": [ 00:18:39.109 { 00:18:39.109 "name": "spare", 00:18:39.109 "uuid": "a8f56684-b18c-58d7-850d-353da74f68d5", 00:18:39.109 "is_configured": true, 00:18:39.109 "data_offset": 0, 00:18:39.109 "data_size": 65536 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "name": null, 00:18:39.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.109 "is_configured": false, 00:18:39.109 "data_offset": 0, 00:18:39.109 "data_size": 65536 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "name": "BaseBdev3", 00:18:39.109 "uuid": "3e0b7e19-7be0-5687-a36a-367799075859", 00:18:39.109 "is_configured": true, 00:18:39.109 "data_offset": 0, 00:18:39.109 "data_size": 65536 00:18:39.109 }, 00:18:39.109 { 00:18:39.109 "name": "BaseBdev4", 00:18:39.109 "uuid": 
"d57e8081-e3e5-5fee-96e2-9263f5ebd836", 00:18:39.109 "is_configured": true, 00:18:39.109 "data_offset": 0, 00:18:39.109 "data_size": 65536 00:18:39.109 } 00:18:39.109 ] 00:18:39.109 }' 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.109 14:18:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 [2024-11-27 14:18:10.200045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.368 [2024-11-27 14:18:10.200160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.368 00:18:39.368 Latency(us) 00:18:39.368 [2024-11-27T14:18:10.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.368 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:39.368 raid_bdev1 : 7.88 87.01 261.04 0.00 0.00 16331.74 329.11 119968.08 00:18:39.368 [2024-11-27T14:18:10.324Z] =================================================================================================================== 00:18:39.368 [2024-11-27T14:18:10.324Z] Total : 87.01 261.04 0.00 0.00 16331.74 329.11 119968.08 00:18:39.368 [2024-11-27 14:18:10.266462] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.368 [2024-11-27 14:18:10.266584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.368 [2024-11-27 14:18:10.266727] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.368 [2024-11-27 14:18:10.266778] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.368 { 00:18:39.368 "results": [ 00:18:39.368 { 00:18:39.368 "job": "raid_bdev1", 00:18:39.368 "core_mask": "0x1", 00:18:39.368 "workload": "randrw", 00:18:39.368 "percentage": 50, 00:18:39.368 "status": "finished", 00:18:39.368 "queue_depth": 2, 00:18:39.368 "io_size": 3145728, 00:18:39.368 "runtime": 7.883733, 00:18:39.368 "iops": 87.01461604546984, 00:18:39.368 "mibps": 261.0438481364095, 00:18:39.368 "io_failed": 0, 00:18:39.368 "io_timeout": 0, 00:18:39.368 "avg_latency_us": 16331.74076158224, 00:18:39.368 "min_latency_us": 329.1109170305677, 00:18:39.368 "max_latency_us": 119968.08384279476 00:18:39.368 } 00:18:39.368 ], 00:18:39.368 "core_count": 1 00:18:39.368 } 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.368 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.627 14:18:10 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:39.627 /dev/nbd0 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:39.627 1+0 records in 00:18:39.627 1+0 records out 00:18:39.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485401 s, 8.4 MB/s 00:18:39.627 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:39.887 /dev/nbd1 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:39.887 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.146 1+0 records in 
00:18:40.146 1+0 records out 00:18:40.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385127 s, 10.6 MB/s 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.146 14:18:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.146 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.406 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 
00:18:40.695 /dev/nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:40.695 1+0 records in 00:18:40.695 1+0 records out 00:18:40.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554371 s, 7.4 MB/s 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 
00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.695 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.956 14:18:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78979 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78979 ']' 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78979 00:18:41.215 14:18:12 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78979 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78979' 00:18:41.215 killing process with pid 78979 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78979 00:18:41.215 Received shutdown signal, test time was about 9.733492 seconds 00:18:41.215 00:18:41.215 Latency(us) 00:18:41.215 [2024-11-27T14:18:12.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.215 [2024-11-27T14:18:12.171Z] =================================================================================================================== 00:18:41.215 [2024-11-27T14:18:12.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.215 [2024-11-27 14:18:12.090161] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:41.215 14:18:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78979 00:18:41.782 [2024-11-27 14:18:12.518837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:43.160 00:18:43.160 real 0m13.276s 00:18:43.160 user 0m16.711s 00:18:43.160 sys 0m1.734s 00:18:43.160 ************************************ 00:18:43.160 END TEST raid_rebuild_test_io 00:18:43.160 ************************************ 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 14:18:13 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:18:43.160 14:18:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:43.160 14:18:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.160 14:18:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 ************************************ 00:18:43.160 START TEST raid_rebuild_test_sb_io 00:18:43.160 ************************************ 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:43.160 14:18:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # 
create_arg+=' -s' 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79388 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79388 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79388 ']' 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.160 14:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.160 [2024-11-27 14:18:13.934643] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:18:43.160 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:43.160 Zero copy mechanism will not be used. 
00:18:43.160 [2024-11-27 14:18:13.934841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79388 ] 00:18:43.420 [2024-11-27 14:18:14.112661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.420 [2024-11-27 14:18:14.233583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.679 [2024-11-27 14:18:14.435163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.679 [2024-11-27 14:18:14.435231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.938 BaseBdev1_malloc 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.938 [2024-11-27 14:18:14.838269] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:43.938 [2024-11-27 14:18:14.838333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.938 [2024-11-27 14:18:14.838356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.938 [2024-11-27 14:18:14.838368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.938 [2024-11-27 14:18:14.840439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.938 [2024-11-27 14:18:14.840482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.938 BaseBdev1 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.938 BaseBdev2_malloc 00:18:43.938 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.939 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:43.939 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.939 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.198 [2024-11-27 14:18:14.893896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:44.198 [2024-11-27 14:18:14.893967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:44.198 [2024-11-27 14:18:14.893992] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:44.198 [2024-11-27 14:18:14.894003] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.198 [2024-11-27 14:18:14.896175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.198 [2024-11-27 14:18:14.896214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:44.198 BaseBdev2 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.198 BaseBdev3_malloc 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.198 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.198 [2024-11-27 14:18:14.957001] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:44.198 [2024-11-27 14:18:14.957072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.198 [2024-11-27 14:18:14.957094] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:44.198 
[2024-11-27 14:18:14.957105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.198 [2024-11-27 14:18:14.959324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.198 [2024-11-27 14:18:14.959362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:44.198 BaseBdev3 00:18:44.199 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:44.199 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:44.199 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 BaseBdev4_malloc 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 [2024-11-27 14:18:15.011384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:44.199 [2024-11-27 14:18:15.011520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.199 [2024-11-27 14:18:15.011554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:44.199 [2024-11-27 14:18:15.011569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.199 [2024-11-27 14:18:15.013904] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.199 [2024-11-27 14:18:15.013956] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:44.199 BaseBdev4 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 spare_malloc 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 spare_delay 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 [2024-11-27 14:18:15.078780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:44.199 [2024-11-27 14:18:15.078835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.199 [2024-11-27 14:18:15.078853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:44.199 [2024-11-27 14:18:15.078863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.199 [2024-11-27 14:18:15.080943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.199 [2024-11-27 14:18:15.080987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:44.199 spare 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 [2024-11-27 14:18:15.090799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:44.199 [2024-11-27 14:18:15.092599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.199 [2024-11-27 14:18:15.092665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:44.199 [2024-11-27 14:18:15.092718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:44.199 [2024-11-27 14:18:15.092915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:44.199 [2024-11-27 14:18:15.092931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:44.199 [2024-11-27 14:18:15.093209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:44.199 [2024-11-27 14:18:15.093381] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:44.199 [2024-11-27 14:18:15.093402] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:44.199 [2024-11-27 14:18:15.093564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.199 "name": "raid_bdev1", 00:18:44.199 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:44.199 "strip_size_kb": 0, 00:18:44.199 "state": "online", 00:18:44.199 "raid_level": "raid1", 00:18:44.199 "superblock": true, 00:18:44.199 "num_base_bdevs": 4, 00:18:44.199 "num_base_bdevs_discovered": 4, 00:18:44.199 "num_base_bdevs_operational": 4, 00:18:44.199 "base_bdevs_list": [ 00:18:44.199 { 00:18:44.199 "name": "BaseBdev1", 00:18:44.199 "uuid": "22b771fd-b7fd-57a1-a656-b901e8542a03", 00:18:44.199 "is_configured": true, 00:18:44.199 "data_offset": 2048, 00:18:44.199 "data_size": 63488 00:18:44.199 }, 00:18:44.199 { 00:18:44.199 "name": "BaseBdev2", 00:18:44.199 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:44.199 "is_configured": true, 00:18:44.199 "data_offset": 2048, 00:18:44.199 "data_size": 63488 00:18:44.199 }, 00:18:44.199 { 00:18:44.199 "name": "BaseBdev3", 00:18:44.199 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:44.199 "is_configured": true, 00:18:44.199 "data_offset": 2048, 00:18:44.199 "data_size": 63488 00:18:44.199 }, 00:18:44.199 { 00:18:44.199 "name": "BaseBdev4", 00:18:44.199 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:44.199 "is_configured": true, 00:18:44.199 "data_offset": 2048, 00:18:44.199 "data_size": 63488 00:18:44.199 } 00:18:44.199 ] 00:18:44.199 }' 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.199 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.766 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 [2024-11-27 14:18:15.498436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 [2024-11-27 14:18:15.585920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.767 14:18:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.767 "name": "raid_bdev1", 00:18:44.767 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:44.767 "strip_size_kb": 0, 00:18:44.767 "state": "online", 00:18:44.767 "raid_level": "raid1", 00:18:44.767 
"superblock": true, 00:18:44.767 "num_base_bdevs": 4, 00:18:44.767 "num_base_bdevs_discovered": 3, 00:18:44.767 "num_base_bdevs_operational": 3, 00:18:44.767 "base_bdevs_list": [ 00:18:44.767 { 00:18:44.767 "name": null, 00:18:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.767 "is_configured": false, 00:18:44.767 "data_offset": 0, 00:18:44.767 "data_size": 63488 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": "BaseBdev2", 00:18:44.767 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:44.767 "is_configured": true, 00:18:44.767 "data_offset": 2048, 00:18:44.767 "data_size": 63488 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": "BaseBdev3", 00:18:44.767 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:44.767 "is_configured": true, 00:18:44.767 "data_offset": 2048, 00:18:44.767 "data_size": 63488 00:18:44.767 }, 00:18:44.767 { 00:18:44.767 "name": "BaseBdev4", 00:18:44.767 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:44.767 "is_configured": true, 00:18:44.767 "data_offset": 2048, 00:18:44.767 "data_size": 63488 00:18:44.767 } 00:18:44.767 ] 00:18:44.767 }' 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.767 14:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.051 [2024-11-27 14:18:15.733859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:45.051 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:45.051 Zero copy mechanism will not be used. 00:18:45.051 Running I/O for 60 seconds... 
00:18:45.309 14:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:45.309 14:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.309 14:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.309 [2024-11-27 14:18:16.037966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.309 14:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.309 14:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:45.309 [2024-11-27 14:18:16.100897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:45.309 [2024-11-27 14:18:16.103105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.309 [2024-11-27 14:18:16.220660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:45.309 [2024-11-27 14:18:16.221398] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:45.567 [2024-11-27 14:18:16.435391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:45.567 [2024-11-27 14:18:16.436344] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:46.083 118.00 IOPS, 354.00 MiB/s [2024-11-27T14:18:17.039Z] [2024-11-27 14:18:16.785143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:46.083 [2024-11-27 14:18:16.786785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:46.083 [2024-11-27 14:18:17.022318] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.342 "name": "raid_bdev1", 00:18:46.342 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:46.342 "strip_size_kb": 0, 00:18:46.342 "state": "online", 00:18:46.342 "raid_level": "raid1", 00:18:46.342 "superblock": true, 00:18:46.342 "num_base_bdevs": 4, 00:18:46.342 "num_base_bdevs_discovered": 4, 00:18:46.342 "num_base_bdevs_operational": 4, 00:18:46.342 "process": { 00:18:46.342 "type": "rebuild", 00:18:46.342 "target": "spare", 00:18:46.342 "progress": { 00:18:46.342 "blocks": 10240, 00:18:46.342 "percent": 16 00:18:46.342 } 00:18:46.342 }, 00:18:46.342 "base_bdevs_list": [ 00:18:46.342 { 00:18:46.342 "name": "spare", 
00:18:46.342 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:46.342 "is_configured": true, 00:18:46.342 "data_offset": 2048, 00:18:46.342 "data_size": 63488 00:18:46.342 }, 00:18:46.342 { 00:18:46.342 "name": "BaseBdev2", 00:18:46.342 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:46.342 "is_configured": true, 00:18:46.342 "data_offset": 2048, 00:18:46.342 "data_size": 63488 00:18:46.342 }, 00:18:46.342 { 00:18:46.342 "name": "BaseBdev3", 00:18:46.342 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:46.342 "is_configured": true, 00:18:46.342 "data_offset": 2048, 00:18:46.342 "data_size": 63488 00:18:46.342 }, 00:18:46.342 { 00:18:46.342 "name": "BaseBdev4", 00:18:46.342 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:46.342 "is_configured": true, 00:18:46.342 "data_offset": 2048, 00:18:46.342 "data_size": 63488 00:18:46.342 } 00:18:46.342 ] 00:18:46.342 }' 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.342 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.342 [2024-11-27 14:18:17.208195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.342 [2024-11-27 14:18:17.262414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:46.342 [2024-11-27 
14:18:17.270224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:46.601 [2024-11-27 14:18:17.372737] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:46.601 [2024-11-27 14:18:17.383455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.601 [2024-11-27 14:18:17.383579] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:46.601 [2024-11-27 14:18:17.383611] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:46.601 [2024-11-27 14:18:17.411081] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.601 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.601 "name": "raid_bdev1", 00:18:46.601 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:46.601 "strip_size_kb": 0, 00:18:46.601 "state": "online", 00:18:46.601 "raid_level": "raid1", 00:18:46.601 "superblock": true, 00:18:46.601 "num_base_bdevs": 4, 00:18:46.601 "num_base_bdevs_discovered": 3, 00:18:46.601 "num_base_bdevs_operational": 3, 00:18:46.601 "base_bdevs_list": [ 00:18:46.601 { 00:18:46.601 "name": null, 00:18:46.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.601 "is_configured": false, 00:18:46.601 "data_offset": 0, 00:18:46.601 "data_size": 63488 00:18:46.601 }, 00:18:46.601 { 00:18:46.602 "name": "BaseBdev2", 00:18:46.602 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:46.602 "is_configured": true, 00:18:46.602 "data_offset": 2048, 00:18:46.602 "data_size": 63488 00:18:46.602 }, 00:18:46.602 { 00:18:46.602 "name": "BaseBdev3", 00:18:46.602 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:46.602 "is_configured": true, 00:18:46.602 "data_offset": 2048, 00:18:46.602 "data_size": 63488 00:18:46.602 }, 00:18:46.602 { 00:18:46.602 "name": "BaseBdev4", 00:18:46.602 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:46.602 "is_configured": true, 00:18:46.602 "data_offset": 2048, 00:18:46.602 "data_size": 63488 00:18:46.602 } 
00:18:46.602 ] 00:18:46.602 }' 00:18:46.602 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.602 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.118 115.00 IOPS, 345.00 MiB/s [2024-11-27T14:18:18.074Z] 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.118 "name": "raid_bdev1", 00:18:47.118 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:47.118 "strip_size_kb": 0, 00:18:47.118 "state": "online", 00:18:47.118 "raid_level": "raid1", 00:18:47.118 "superblock": true, 00:18:47.118 "num_base_bdevs": 4, 00:18:47.118 "num_base_bdevs_discovered": 3, 00:18:47.118 "num_base_bdevs_operational": 3, 00:18:47.118 "base_bdevs_list": [ 00:18:47.118 { 00:18:47.118 "name": null, 00:18:47.118 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:47.118 "is_configured": false, 00:18:47.118 "data_offset": 0, 00:18:47.118 "data_size": 63488 00:18:47.118 }, 00:18:47.118 { 00:18:47.118 "name": "BaseBdev2", 00:18:47.118 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:47.118 "is_configured": true, 00:18:47.118 "data_offset": 2048, 00:18:47.118 "data_size": 63488 00:18:47.118 }, 00:18:47.118 { 00:18:47.118 "name": "BaseBdev3", 00:18:47.118 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:47.118 "is_configured": true, 00:18:47.118 "data_offset": 2048, 00:18:47.118 "data_size": 63488 00:18:47.118 }, 00:18:47.118 { 00:18:47.118 "name": "BaseBdev4", 00:18:47.118 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:47.118 "is_configured": true, 00:18:47.118 "data_offset": 2048, 00:18:47.118 "data_size": 63488 00:18:47.118 } 00:18:47.118 ] 00:18:47.118 }' 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.118 [2024-11-27 14:18:17.944355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.118 14:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 
00:18:47.118 [2024-11-27 14:18:17.993819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:47.118 [2024-11-27 14:18:17.996080] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.376 [2024-11-27 14:18:18.132162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:47.376 [2024-11-27 14:18:18.279073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:47.636 [2024-11-27 14:18:18.523785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:47.636 [2024-11-27 14:18:18.524565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:47.895 [2024-11-27 14:18:18.644889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:48.153 141.00 IOPS, 423.00 MiB/s [2024-11-27T14:18:19.109Z] [2024-11-27 14:18:18.898226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:48.153 [2024-11-27 14:18:18.899711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.153 14:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.153 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.153 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.153 "name": "raid_bdev1", 00:18:48.153 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:48.153 "strip_size_kb": 0, 00:18:48.153 "state": "online", 00:18:48.153 "raid_level": "raid1", 00:18:48.153 "superblock": true, 00:18:48.153 "num_base_bdevs": 4, 00:18:48.153 "num_base_bdevs_discovered": 4, 00:18:48.153 "num_base_bdevs_operational": 4, 00:18:48.153 "process": { 00:18:48.153 "type": "rebuild", 00:18:48.153 "target": "spare", 00:18:48.153 "progress": { 00:18:48.153 "blocks": 14336, 00:18:48.153 "percent": 22 00:18:48.153 } 00:18:48.153 }, 00:18:48.153 "base_bdevs_list": [ 00:18:48.153 { 00:18:48.153 "name": "spare", 00:18:48.153 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:48.153 "is_configured": true, 00:18:48.153 "data_offset": 2048, 00:18:48.153 "data_size": 63488 00:18:48.153 }, 00:18:48.153 { 00:18:48.153 "name": "BaseBdev2", 00:18:48.153 "uuid": "3eaeb4c8-4cb9-5a8e-a77d-5019367aad31", 00:18:48.154 "is_configured": true, 00:18:48.154 "data_offset": 2048, 00:18:48.154 "data_size": 63488 00:18:48.154 }, 00:18:48.154 { 00:18:48.154 "name": "BaseBdev3", 00:18:48.154 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:48.154 "is_configured": true, 00:18:48.154 "data_offset": 2048, 00:18:48.154 "data_size": 63488 00:18:48.154 }, 
00:18:48.154 { 00:18:48.154 "name": "BaseBdev4", 00:18:48.154 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:48.154 "is_configured": true, 00:18:48.154 "data_offset": 2048, 00:18:48.154 "data_size": 63488 00:18:48.154 } 00:18:48.154 ] 00:18:48.154 }' 00:18:48.154 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.154 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.154 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.412 [2024-11-27 14:18:19.126013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:48.412 [2024-11-27 14:18:19.126472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:48.412 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.412 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.412 [2024-11-27 14:18:19.151845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:48.671 [2024-11-27 14:18:19.556421] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:48.671 [2024-11-27 14:18:19.556550] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.671 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.672 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.672 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.672 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.672 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:48.672 "name": "raid_bdev1", 00:18:48.672 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:48.672 "strip_size_kb": 0, 00:18:48.672 "state": "online", 00:18:48.672 "raid_level": "raid1", 00:18:48.672 "superblock": true, 00:18:48.672 "num_base_bdevs": 4, 00:18:48.672 "num_base_bdevs_discovered": 3, 00:18:48.672 "num_base_bdevs_operational": 3, 00:18:48.672 "process": { 00:18:48.672 "type": "rebuild", 00:18:48.672 "target": "spare", 00:18:48.672 "progress": { 00:18:48.672 "blocks": 18432, 00:18:48.672 "percent": 29 00:18:48.672 } 00:18:48.672 }, 00:18:48.672 "base_bdevs_list": [ 00:18:48.672 { 00:18:48.672 "name": "spare", 00:18:48.672 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:48.672 "is_configured": true, 00:18:48.672 "data_offset": 2048, 00:18:48.672 "data_size": 63488 00:18:48.672 }, 00:18:48.672 { 00:18:48.672 "name": null, 00:18:48.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.672 "is_configured": false, 00:18:48.672 "data_offset": 0, 00:18:48.672 "data_size": 63488 00:18:48.672 }, 00:18:48.672 { 00:18:48.672 "name": "BaseBdev3", 00:18:48.672 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:48.672 "is_configured": true, 00:18:48.672 "data_offset": 2048, 00:18:48.672 "data_size": 63488 00:18:48.672 }, 00:18:48.672 { 00:18:48.672 "name": "BaseBdev4", 00:18:48.672 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:48.672 "is_configured": true, 00:18:48.672 "data_offset": 2048, 00:18:48.672 "data_size": 63488 00:18:48.672 } 00:18:48.672 ] 00:18:48.672 }' 00:18:48.672 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.930 [2024-11-27 14:18:19.697097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
20480 offset_begin: 18432 offset_end: 24576 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=507 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.930 116.50 IOPS, 349.50 MiB/s [2024-11-27T14:18:19.886Z] 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.930 "name": "raid_bdev1", 00:18:48.930 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:48.930 "strip_size_kb": 0, 00:18:48.930 "state": "online", 00:18:48.930 "raid_level": "raid1", 00:18:48.930 "superblock": true, 00:18:48.930 "num_base_bdevs": 4, 00:18:48.930 "num_base_bdevs_discovered": 3, 00:18:48.930 
"num_base_bdevs_operational": 3, 00:18:48.930 "process": { 00:18:48.930 "type": "rebuild", 00:18:48.930 "target": "spare", 00:18:48.930 "progress": { 00:18:48.930 "blocks": 20480, 00:18:48.930 "percent": 32 00:18:48.930 } 00:18:48.930 }, 00:18:48.930 "base_bdevs_list": [ 00:18:48.930 { 00:18:48.930 "name": "spare", 00:18:48.930 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:48.930 "is_configured": true, 00:18:48.930 "data_offset": 2048, 00:18:48.930 "data_size": 63488 00:18:48.930 }, 00:18:48.930 { 00:18:48.930 "name": null, 00:18:48.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.930 "is_configured": false, 00:18:48.930 "data_offset": 0, 00:18:48.930 "data_size": 63488 00:18:48.930 }, 00:18:48.930 { 00:18:48.930 "name": "BaseBdev3", 00:18:48.930 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:48.930 "is_configured": true, 00:18:48.930 "data_offset": 2048, 00:18:48.930 "data_size": 63488 00:18:48.930 }, 00:18:48.930 { 00:18:48.930 "name": "BaseBdev4", 00:18:48.930 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:48.930 "is_configured": true, 00:18:48.930 "data_offset": 2048, 00:18:48.930 "data_size": 63488 00:18:48.930 } 00:18:48.930 ] 00:18:48.930 }' 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.930 14:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:49.189 [2024-11-27 14:18:19.907666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:49.189 [2024-11-27 14:18:19.908341] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:49.758 [2024-11-27 14:18:20.592175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:50.018 102.60 IOPS, 307.80 MiB/s [2024-11-27T14:18:20.974Z] 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.018 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.018 "name": "raid_bdev1", 00:18:50.018 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:50.018 "strip_size_kb": 0, 00:18:50.018 "state": "online", 00:18:50.018 "raid_level": "raid1", 00:18:50.018 "superblock": true, 00:18:50.018 "num_base_bdevs": 4, 00:18:50.018 "num_base_bdevs_discovered": 3, 
00:18:50.018 "num_base_bdevs_operational": 3, 00:18:50.018 "process": { 00:18:50.018 "type": "rebuild", 00:18:50.018 "target": "spare", 00:18:50.018 "progress": { 00:18:50.018 "blocks": 36864, 00:18:50.018 "percent": 58 00:18:50.018 } 00:18:50.018 }, 00:18:50.018 "base_bdevs_list": [ 00:18:50.018 { 00:18:50.018 "name": "spare", 00:18:50.018 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:50.018 "is_configured": true, 00:18:50.018 "data_offset": 2048, 00:18:50.018 "data_size": 63488 00:18:50.018 }, 00:18:50.018 { 00:18:50.018 "name": null, 00:18:50.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.018 "is_configured": false, 00:18:50.018 "data_offset": 0, 00:18:50.018 "data_size": 63488 00:18:50.018 }, 00:18:50.018 { 00:18:50.018 "name": "BaseBdev3", 00:18:50.018 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:50.018 "is_configured": true, 00:18:50.018 "data_offset": 2048, 00:18:50.018 "data_size": 63488 00:18:50.018 }, 00:18:50.018 { 00:18:50.018 "name": "BaseBdev4", 00:18:50.019 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:50.019 "is_configured": true, 00:18:50.019 "data_offset": 2048, 00:18:50.019 "data_size": 63488 00:18:50.019 } 00:18:50.019 ] 00:18:50.019 }' 00:18:50.019 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.019 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:50.019 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.276 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.276 14:18:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:50.534 [2024-11-27 14:18:21.299596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:50.794 [2024-11-27 14:18:21.648345] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:18:51.053 90.67 IOPS, 272.00 MiB/s [2024-11-27T14:18:22.009Z] [2024-11-27 14:18:21.871998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.053 14:18:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.313 "name": "raid_bdev1", 00:18:51.313 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:51.313 "strip_size_kb": 0, 00:18:51.313 "state": "online", 00:18:51.313 "raid_level": "raid1", 00:18:51.313 "superblock": true, 00:18:51.313 "num_base_bdevs": 4, 00:18:51.313 "num_base_bdevs_discovered": 3, 
00:18:51.313 "num_base_bdevs_operational": 3, 00:18:51.313 "process": { 00:18:51.313 "type": "rebuild", 00:18:51.313 "target": "spare", 00:18:51.313 "progress": { 00:18:51.313 "blocks": 53248, 00:18:51.313 "percent": 83 00:18:51.313 } 00:18:51.313 }, 00:18:51.313 "base_bdevs_list": [ 00:18:51.313 { 00:18:51.313 "name": "spare", 00:18:51.313 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:51.313 "is_configured": true, 00:18:51.313 "data_offset": 2048, 00:18:51.313 "data_size": 63488 00:18:51.313 }, 00:18:51.313 { 00:18:51.313 "name": null, 00:18:51.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.313 "is_configured": false, 00:18:51.313 "data_offset": 0, 00:18:51.313 "data_size": 63488 00:18:51.313 }, 00:18:51.313 { 00:18:51.313 "name": "BaseBdev3", 00:18:51.313 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:51.313 "is_configured": true, 00:18:51.313 "data_offset": 2048, 00:18:51.313 "data_size": 63488 00:18:51.313 }, 00:18:51.313 { 00:18:51.313 "name": "BaseBdev4", 00:18:51.313 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:51.313 "is_configured": true, 00:18:51.313 "data_offset": 2048, 00:18:51.313 "data_size": 63488 00:18:51.313 } 00:18:51.313 ] 00:18:51.313 }' 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.313 14:18:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:51.313 [2024-11-27 14:18:22.200495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:51.572 [2024-11-27 14:18:22.320210] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:51.830 [2024-11-27 14:18:22.656717] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:51.830 81.57 IOPS, 244.71 MiB/s [2024-11-27T14:18:22.786Z] [2024-11-27 14:18:22.756556] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:51.830 [2024-11-27 14:18:22.766423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.397 "name": "raid_bdev1", 00:18:52.397 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 
00:18:52.397 "strip_size_kb": 0, 00:18:52.397 "state": "online", 00:18:52.397 "raid_level": "raid1", 00:18:52.397 "superblock": true, 00:18:52.397 "num_base_bdevs": 4, 00:18:52.397 "num_base_bdevs_discovered": 3, 00:18:52.397 "num_base_bdevs_operational": 3, 00:18:52.397 "base_bdevs_list": [ 00:18:52.397 { 00:18:52.397 "name": "spare", 00:18:52.397 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:52.397 "is_configured": true, 00:18:52.397 "data_offset": 2048, 00:18:52.397 "data_size": 63488 00:18:52.397 }, 00:18:52.397 { 00:18:52.397 "name": null, 00:18:52.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.397 "is_configured": false, 00:18:52.397 "data_offset": 0, 00:18:52.397 "data_size": 63488 00:18:52.397 }, 00:18:52.397 { 00:18:52.397 "name": "BaseBdev3", 00:18:52.397 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:52.397 "is_configured": true, 00:18:52.397 "data_offset": 2048, 00:18:52.397 "data_size": 63488 00:18:52.397 }, 00:18:52.397 { 00:18:52.397 "name": "BaseBdev4", 00:18:52.397 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:52.397 "is_configured": true, 00:18:52.397 "data_offset": 2048, 00:18:52.397 "data_size": 63488 00:18:52.397 } 00:18:52.397 ] 00:18:52.397 }' 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:52.397 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.398 "name": "raid_bdev1", 00:18:52.398 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:52.398 "strip_size_kb": 0, 00:18:52.398 "state": "online", 00:18:52.398 "raid_level": "raid1", 00:18:52.398 "superblock": true, 00:18:52.398 "num_base_bdevs": 4, 00:18:52.398 "num_base_bdevs_discovered": 3, 00:18:52.398 "num_base_bdevs_operational": 3, 00:18:52.398 "base_bdevs_list": [ 00:18:52.398 { 00:18:52.398 "name": "spare", 00:18:52.398 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:52.398 "is_configured": true, 00:18:52.398 "data_offset": 2048, 00:18:52.398 "data_size": 63488 00:18:52.398 }, 00:18:52.398 { 00:18:52.398 "name": null, 00:18:52.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.398 "is_configured": false, 00:18:52.398 "data_offset": 0, 00:18:52.398 "data_size": 63488 00:18:52.398 }, 00:18:52.398 { 00:18:52.398 "name": "BaseBdev3", 00:18:52.398 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:52.398 "is_configured": true, 
00:18:52.398 "data_offset": 2048, 00:18:52.398 "data_size": 63488 00:18:52.398 }, 00:18:52.398 { 00:18:52.398 "name": "BaseBdev4", 00:18:52.398 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:52.398 "is_configured": true, 00:18:52.398 "data_offset": 2048, 00:18:52.398 "data_size": 63488 00:18:52.398 } 00:18:52.398 ] 00:18:52.398 }' 00:18:52.398 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.656 "name": "raid_bdev1", 00:18:52.656 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:52.656 "strip_size_kb": 0, 00:18:52.656 "state": "online", 00:18:52.656 "raid_level": "raid1", 00:18:52.656 "superblock": true, 00:18:52.656 "num_base_bdevs": 4, 00:18:52.656 "num_base_bdevs_discovered": 3, 00:18:52.656 "num_base_bdevs_operational": 3, 00:18:52.656 "base_bdevs_list": [ 00:18:52.656 { 00:18:52.656 "name": "spare", 00:18:52.656 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:52.656 "is_configured": true, 00:18:52.656 "data_offset": 2048, 00:18:52.656 "data_size": 63488 00:18:52.656 }, 00:18:52.656 { 00:18:52.656 "name": null, 00:18:52.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.656 "is_configured": false, 00:18:52.656 "data_offset": 0, 00:18:52.656 "data_size": 63488 00:18:52.656 }, 00:18:52.656 { 00:18:52.656 "name": "BaseBdev3", 00:18:52.656 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:52.656 "is_configured": true, 00:18:52.656 "data_offset": 2048, 00:18:52.656 "data_size": 63488 00:18:52.656 }, 00:18:52.656 { 00:18:52.656 "name": "BaseBdev4", 00:18:52.656 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:52.656 "is_configured": true, 00:18:52.656 "data_offset": 2048, 00:18:52.656 "data_size": 63488 00:18:52.656 } 00:18:52.656 ] 00:18:52.656 }' 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.656 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.916 74.75 IOPS, 224.25 MiB/s [2024-11-27T14:18:23.872Z] 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:52.916 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.916 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.916 [2024-11-27 14:18:23.790529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.916 [2024-11-27 14:18:23.790648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:53.175 00:18:53.175 Latency(us) 00:18:53.175 [2024-11-27T14:18:24.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.175 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:53.175 raid_bdev1 : 8.17 73.81 221.42 0.00 0.00 18343.90 389.92 116762.83 00:18:53.175 [2024-11-27T14:18:24.131Z] =================================================================================================================== 00:18:53.175 [2024-11-27T14:18:24.131Z] Total : 73.81 221.42 0.00 0.00 18343.90 389.92 116762.83 00:18:53.175 [2024-11-27 14:18:23.916495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.175 { 00:18:53.175 "results": [ 00:18:53.175 { 00:18:53.175 "job": "raid_bdev1", 00:18:53.175 "core_mask": "0x1", 00:18:53.175 "workload": "randrw", 00:18:53.175 "percentage": 50, 00:18:53.175 "status": "finished", 00:18:53.175 "queue_depth": 2, 00:18:53.175 "io_size": 3145728, 00:18:53.175 "runtime": 8.170096, 00:18:53.175 "iops": 73.80574230706713, 00:18:53.175 "mibps": 221.4172269212014, 00:18:53.175 "io_failed": 0, 00:18:53.175 "io_timeout": 0, 00:18:53.175 "avg_latency_us": 18343.90397937532, 
00:18:53.175 "min_latency_us": 389.92489082969433, 00:18:53.175 "max_latency_us": 116762.82969432314 00:18:53.175 } 00:18:53.175 ], 00:18:53.175 "core_count": 1 00:18:53.175 } 00:18:53.175 [2024-11-27 14:18:23.916668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.175 [2024-11-27 14:18:23.916802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.175 [2024-11-27 14:18:23.916816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
local bdev_list 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.175 14:18:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:53.435 /dev/nbd0 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:18:53.435 1+0 records in 00:18:53.435 1+0 records out 00:18:53.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340973 s, 12.0 MB/s 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:53.435 14:18:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.435 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:53.696 /dev/nbd1 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:53.696 1+0 records in 00:18:53.696 1+0 
records out 00:18:53.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380282 s, 10.8 MB/s 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:53.696 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.956 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:54.216 14:18:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.216 14:18:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:54.475 /dev/nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:54.475 1+0 records in 00:18:54.475 1+0 records out 00:18:54.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247574 s, 16.5 MB/s 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.475 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:54.733 14:18:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:54.733 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.991 [2024-11-27 14:18:25.882484] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:54.991 [2024-11-27 14:18:25.882947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.991 [2024-11-27 14:18:25.883132] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:54.991 [2024-11-27 14:18:25.883266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.991 [2024-11-27 14:18:25.885924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.991 [2024-11-27 14:18:25.886100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:54.991 [2024-11-27 14:18:25.886348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:54.991 [2024-11-27 14:18:25.886462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.991 [2024-11-27 14:18:25.886697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:54.991 [2024-11-27 14:18:25.886885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.991 spare 00:18:54.991 14:18:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.991 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.249 [2024-11-27 14:18:25.986847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:55.249 [2024-11-27 14:18:25.986883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:55.249 [2024-11-27 14:18:25.987314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:55.249 [2024-11-27 14:18:25.987531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:55.249 [2024-11-27 14:18:25.987553] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:55.249 [2024-11-27 14:18:25.987769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.249 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.250 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.250 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.250 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.250 14:18:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.250 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.250 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.250 "name": "raid_bdev1", 00:18:55.250 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:55.250 "strip_size_kb": 0, 00:18:55.250 "state": "online", 00:18:55.250 "raid_level": "raid1", 00:18:55.250 "superblock": true, 00:18:55.250 "num_base_bdevs": 4, 00:18:55.250 "num_base_bdevs_discovered": 3, 00:18:55.250 "num_base_bdevs_operational": 3, 00:18:55.250 "base_bdevs_list": [ 00:18:55.250 { 00:18:55.250 "name": "spare", 00:18:55.250 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:55.250 "is_configured": true, 00:18:55.250 "data_offset": 2048, 00:18:55.250 "data_size": 63488 00:18:55.250 }, 00:18:55.250 { 00:18:55.250 "name": null, 00:18:55.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.250 "is_configured": false, 00:18:55.250 "data_offset": 2048, 00:18:55.250 "data_size": 63488 00:18:55.250 }, 00:18:55.250 { 00:18:55.250 "name": "BaseBdev3", 00:18:55.250 
"uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:55.250 "is_configured": true, 00:18:55.250 "data_offset": 2048, 00:18:55.250 "data_size": 63488 00:18:55.250 }, 00:18:55.250 { 00:18:55.250 "name": "BaseBdev4", 00:18:55.250 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:55.250 "is_configured": true, 00:18:55.250 "data_offset": 2048, 00:18:55.250 "data_size": 63488 00:18:55.250 } 00:18:55.250 ] 00:18:55.250 }' 00:18:55.250 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.250 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.507 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.508 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.508 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.767 "name": "raid_bdev1", 00:18:55.767 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:55.767 "strip_size_kb": 0, 
00:18:55.767 "state": "online", 00:18:55.767 "raid_level": "raid1", 00:18:55.767 "superblock": true, 00:18:55.767 "num_base_bdevs": 4, 00:18:55.767 "num_base_bdevs_discovered": 3, 00:18:55.767 "num_base_bdevs_operational": 3, 00:18:55.767 "base_bdevs_list": [ 00:18:55.767 { 00:18:55.767 "name": "spare", 00:18:55.767 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:55.767 "is_configured": true, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": null, 00:18:55.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.767 "is_configured": false, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": "BaseBdev3", 00:18:55.767 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:55.767 "is_configured": true, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": "BaseBdev4", 00:18:55.767 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:55.767 "is_configured": true, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 } 00:18:55.767 ] 00:18:55.767 }' 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.767 14:18:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.767 [2024-11-27 14:18:26.609899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.767 "name": "raid_bdev1", 00:18:55.767 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:55.767 "strip_size_kb": 0, 00:18:55.767 "state": "online", 00:18:55.767 "raid_level": "raid1", 00:18:55.767 "superblock": true, 00:18:55.767 "num_base_bdevs": 4, 00:18:55.767 "num_base_bdevs_discovered": 2, 00:18:55.767 "num_base_bdevs_operational": 2, 00:18:55.767 "base_bdevs_list": [ 00:18:55.767 { 00:18:55.767 "name": null, 00:18:55.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.767 "is_configured": false, 00:18:55.767 "data_offset": 0, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": null, 00:18:55.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.767 "is_configured": false, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": "BaseBdev3", 00:18:55.767 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:55.767 "is_configured": true, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 }, 00:18:55.767 { 00:18:55.767 "name": "BaseBdev4", 00:18:55.767 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:55.767 "is_configured": true, 00:18:55.767 "data_offset": 2048, 00:18:55.767 "data_size": 63488 00:18:55.767 } 00:18:55.767 ] 00:18:55.767 }' 00:18:55.767 
14:18:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.767 14:18:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.359 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:56.359 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.359 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.359 [2024-11-27 14:18:27.045279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.359 [2024-11-27 14:18:27.045547] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:56.359 [2024-11-27 14:18:27.045613] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:56.359 [2024-11-27 14:18:27.046038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:56.359 [2024-11-27 14:18:27.062603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:56.359 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.359 14:18:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:56.359 [2024-11-27 14:18:27.064827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.292 "name": "raid_bdev1", 00:18:57.292 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:57.292 "strip_size_kb": 0, 00:18:57.292 "state": "online", 00:18:57.292 "raid_level": "raid1", 00:18:57.292 "superblock": true, 00:18:57.292 "num_base_bdevs": 4, 00:18:57.292 "num_base_bdevs_discovered": 3, 00:18:57.292 "num_base_bdevs_operational": 3, 00:18:57.292 "process": { 00:18:57.292 "type": "rebuild", 00:18:57.292 "target": "spare", 00:18:57.292 "progress": { 00:18:57.292 "blocks": 20480, 00:18:57.292 "percent": 32 00:18:57.292 } 00:18:57.292 }, 00:18:57.292 "base_bdevs_list": [ 00:18:57.292 { 00:18:57.292 "name": "spare", 00:18:57.292 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:57.292 "is_configured": true, 00:18:57.292 "data_offset": 2048, 00:18:57.292 "data_size": 63488 00:18:57.292 }, 00:18:57.292 { 00:18:57.292 "name": null, 00:18:57.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.292 "is_configured": false, 00:18:57.292 "data_offset": 2048, 00:18:57.292 "data_size": 63488 00:18:57.292 }, 00:18:57.292 { 00:18:57.292 "name": "BaseBdev3", 00:18:57.292 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:57.292 
"is_configured": true, 00:18:57.292 "data_offset": 2048, 00:18:57.292 "data_size": 63488 00:18:57.292 }, 00:18:57.292 { 00:18:57.292 "name": "BaseBdev4", 00:18:57.292 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:57.292 "is_configured": true, 00:18:57.292 "data_offset": 2048, 00:18:57.292 "data_size": 63488 00:18:57.292 } 00:18:57.292 ] 00:18:57.292 }' 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.292 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.292 [2024-11-27 14:18:28.192763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.550 [2024-11-27 14:18:28.271054] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:57.550 [2024-11-27 14:18:28.271626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.550 [2024-11-27 14:18:28.271664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:57.550 [2024-11-27 14:18:28.271674] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.550 "name": "raid_bdev1", 00:18:57.550 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:57.550 "strip_size_kb": 0, 00:18:57.550 "state": "online", 00:18:57.550 "raid_level": "raid1", 00:18:57.550 "superblock": true, 00:18:57.550 "num_base_bdevs": 4, 00:18:57.550 
"num_base_bdevs_discovered": 2, 00:18:57.550 "num_base_bdevs_operational": 2, 00:18:57.550 "base_bdevs_list": [ 00:18:57.550 { 00:18:57.550 "name": null, 00:18:57.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.550 "is_configured": false, 00:18:57.550 "data_offset": 0, 00:18:57.550 "data_size": 63488 00:18:57.550 }, 00:18:57.550 { 00:18:57.550 "name": null, 00:18:57.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.550 "is_configured": false, 00:18:57.550 "data_offset": 2048, 00:18:57.550 "data_size": 63488 00:18:57.550 }, 00:18:57.550 { 00:18:57.550 "name": "BaseBdev3", 00:18:57.550 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:57.550 "is_configured": true, 00:18:57.550 "data_offset": 2048, 00:18:57.550 "data_size": 63488 00:18:57.550 }, 00:18:57.550 { 00:18:57.550 "name": "BaseBdev4", 00:18:57.550 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:57.550 "is_configured": true, 00:18:57.550 "data_offset": 2048, 00:18:57.550 "data_size": 63488 00:18:57.550 } 00:18:57.550 ] 00:18:57.550 }' 00:18:57.550 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.551 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.808 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:57.808 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.808 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.808 [2024-11-27 14:18:28.738332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:57.808 [2024-11-27 14:18:28.738655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:57.808 [2024-11-27 14:18:28.738788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:57.808 [2024-11-27 
14:18:28.738882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:57.808 [2024-11-27 14:18:28.739536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:57.808 [2024-11-27 14:18:28.739681] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:57.808 [2024-11-27 14:18:28.739885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:57.808 [2024-11-27 14:18:28.739956] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:57.808 [2024-11-27 14:18:28.740004] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:57.808 [2024-11-27 14:18:28.740106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:57.808 [2024-11-27 14:18:28.756554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:57.808 spare 00:18:57.808 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.808 14:18:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:57.808 [2024-11-27 14:18:28.758620] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.187 14:18:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.187 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.187 "name": "raid_bdev1", 00:18:59.187 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:59.187 "strip_size_kb": 0, 00:18:59.187 "state": "online", 00:18:59.187 "raid_level": "raid1", 00:18:59.187 "superblock": true, 00:18:59.187 "num_base_bdevs": 4, 00:18:59.187 "num_base_bdevs_discovered": 3, 00:18:59.187 "num_base_bdevs_operational": 3, 00:18:59.187 "process": { 00:18:59.187 "type": "rebuild", 00:18:59.187 "target": "spare", 00:18:59.187 "progress": { 00:18:59.187 "blocks": 20480, 00:18:59.187 "percent": 32 00:18:59.187 } 00:18:59.187 }, 00:18:59.187 "base_bdevs_list": [ 00:18:59.187 { 00:18:59.187 "name": "spare", 00:18:59.187 "uuid": "4366c410-3040-5380-9e5c-daabc5b4907c", 00:18:59.187 "is_configured": true, 00:18:59.187 "data_offset": 2048, 00:18:59.187 "data_size": 63488 00:18:59.187 }, 00:18:59.187 { 00:18:59.187 "name": null, 00:18:59.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.187 "is_configured": false, 00:18:59.187 "data_offset": 2048, 00:18:59.187 "data_size": 63488 00:18:59.187 }, 00:18:59.187 { 00:18:59.187 "name": "BaseBdev3", 00:18:59.187 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:59.187 "is_configured": true, 00:18:59.187 "data_offset": 2048, 00:18:59.187 "data_size": 63488 00:18:59.188 }, 00:18:59.188 { 00:18:59.188 "name": "BaseBdev4", 00:18:59.188 "uuid": 
"bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:59.188 "is_configured": true, 00:18:59.188 "data_offset": 2048, 00:18:59.188 "data_size": 63488 00:18:59.188 } 00:18:59.188 ] 00:18:59.188 }' 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.188 [2024-11-27 14:18:29.918432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.188 [2024-11-27 14:18:29.964736] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:59.188 [2024-11-27 14:18:29.965329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.188 [2024-11-27 14:18:29.965358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:59.188 [2024-11-27 14:18:29.965372] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.188 14:18:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.188 14:18:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.188 "name": "raid_bdev1", 00:18:59.188 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:59.188 "strip_size_kb": 0, 00:18:59.188 "state": "online", 00:18:59.188 "raid_level": "raid1", 00:18:59.188 "superblock": true, 00:18:59.188 "num_base_bdevs": 4, 00:18:59.188 "num_base_bdevs_discovered": 2, 00:18:59.188 "num_base_bdevs_operational": 2, 00:18:59.188 "base_bdevs_list": [ 00:18:59.188 { 00:18:59.188 "name": null, 00:18:59.188 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:59.188 "is_configured": false, 00:18:59.188 "data_offset": 0, 00:18:59.188 "data_size": 63488 00:18:59.188 }, 00:18:59.188 { 00:18:59.188 "name": null, 00:18:59.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.188 "is_configured": false, 00:18:59.188 "data_offset": 2048, 00:18:59.188 "data_size": 63488 00:18:59.188 }, 00:18:59.188 { 00:18:59.188 "name": "BaseBdev3", 00:18:59.188 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:59.188 "is_configured": true, 00:18:59.188 "data_offset": 2048, 00:18:59.188 "data_size": 63488 00:18:59.188 }, 00:18:59.188 { 00:18:59.188 "name": "BaseBdev4", 00:18:59.188 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:59.188 "is_configured": true, 00:18:59.188 "data_offset": 2048, 00:18:59.188 "data_size": 63488 00:18:59.188 } 00:18:59.188 ] 00:18:59.188 }' 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.188 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.755 14:18:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.755 "name": "raid_bdev1", 00:18:59.755 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:18:59.755 "strip_size_kb": 0, 00:18:59.755 "state": "online", 00:18:59.755 "raid_level": "raid1", 00:18:59.755 "superblock": true, 00:18:59.755 "num_base_bdevs": 4, 00:18:59.755 "num_base_bdevs_discovered": 2, 00:18:59.755 "num_base_bdevs_operational": 2, 00:18:59.755 "base_bdevs_list": [ 00:18:59.755 { 00:18:59.755 "name": null, 00:18:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.755 "is_configured": false, 00:18:59.755 "data_offset": 0, 00:18:59.755 "data_size": 63488 00:18:59.755 }, 00:18:59.755 { 00:18:59.755 "name": null, 00:18:59.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.755 "is_configured": false, 00:18:59.755 "data_offset": 2048, 00:18:59.755 "data_size": 63488 00:18:59.755 }, 00:18:59.755 { 00:18:59.755 "name": "BaseBdev3", 00:18:59.755 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:18:59.755 "is_configured": true, 00:18:59.755 "data_offset": 2048, 00:18:59.755 "data_size": 63488 00:18:59.755 }, 00:18:59.755 { 00:18:59.755 "name": "BaseBdev4", 00:18:59.755 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:18:59.755 "is_configured": true, 00:18:59.755 "data_offset": 2048, 00:18:59.755 "data_size": 63488 00:18:59.755 } 00:18:59.755 ] 00:18:59.755 }' 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.755 
14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:59.755 [2024-11-27 14:18:30.590511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:59.755 [2024-11-27 14:18:30.590809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.755 [2024-11-27 14:18:30.590906] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:59.755 [2024-11-27 14:18:30.591002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.755 [2024-11-27 14:18:30.591615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.755 [2024-11-27 14:18:30.591768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:59.755 [2024-11-27 14:18:30.591986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:59.755 [2024-11-27 14:18:30.592048] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:59.755 [2024-11-27 14:18:30.592093] 
bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:59.755 [2024-11-27 14:18:30.592163] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:59.755 BaseBdev1 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.755 14:18:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.689 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.690 14:18:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.690 "name": "raid_bdev1", 00:19:00.690 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:19:00.690 "strip_size_kb": 0, 00:19:00.690 "state": "online", 00:19:00.690 "raid_level": "raid1", 00:19:00.690 "superblock": true, 00:19:00.690 "num_base_bdevs": 4, 00:19:00.690 "num_base_bdevs_discovered": 2, 00:19:00.690 "num_base_bdevs_operational": 2, 00:19:00.690 "base_bdevs_list": [ 00:19:00.690 { 00:19:00.690 "name": null, 00:19:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.690 "is_configured": false, 00:19:00.690 "data_offset": 0, 00:19:00.690 "data_size": 63488 00:19:00.690 }, 00:19:00.690 { 00:19:00.690 "name": null, 00:19:00.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.690 "is_configured": false, 00:19:00.690 "data_offset": 2048, 00:19:00.690 "data_size": 63488 00:19:00.690 }, 00:19:00.690 { 00:19:00.690 "name": "BaseBdev3", 00:19:00.690 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:19:00.690 "is_configured": true, 00:19:00.690 "data_offset": 2048, 00:19:00.690 "data_size": 63488 00:19:00.690 }, 00:19:00.690 { 00:19:00.690 "name": "BaseBdev4", 00:19:00.690 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:19:00.690 "is_configured": true, 00:19:00.690 "data_offset": 2048, 00:19:00.690 "data_size": 63488 00:19:00.690 } 00:19:00.690 ] 00:19:00.690 }' 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.690 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.257 14:18:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.257 "name": "raid_bdev1", 00:19:01.257 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:19:01.257 "strip_size_kb": 0, 00:19:01.257 "state": "online", 00:19:01.257 "raid_level": "raid1", 00:19:01.257 "superblock": true, 00:19:01.257 "num_base_bdevs": 4, 00:19:01.257 "num_base_bdevs_discovered": 2, 00:19:01.257 "num_base_bdevs_operational": 2, 00:19:01.257 "base_bdevs_list": [ 00:19:01.257 { 00:19:01.257 "name": null, 00:19:01.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.257 "is_configured": false, 00:19:01.257 "data_offset": 0, 00:19:01.257 "data_size": 63488 00:19:01.257 }, 00:19:01.257 { 00:19:01.257 "name": null, 00:19:01.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.257 "is_configured": false, 00:19:01.257 "data_offset": 2048, 00:19:01.257 "data_size": 63488 00:19:01.257 }, 00:19:01.257 { 00:19:01.257 "name": "BaseBdev3", 00:19:01.257 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 
00:19:01.257 "is_configured": true, 00:19:01.257 "data_offset": 2048, 00:19:01.257 "data_size": 63488 00:19:01.257 }, 00:19:01.257 { 00:19:01.257 "name": "BaseBdev4", 00:19:01.257 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:19:01.257 "is_configured": true, 00:19:01.257 "data_offset": 2048, 00:19:01.257 "data_size": 63488 00:19:01.257 } 00:19:01.257 ] 00:19:01.257 }' 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.257 14:18:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.257 [2024-11-27 14:18:32.096276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.257 [2024-11-27 14:18:32.096514] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:01.257 [2024-11-27 14:18:32.096534] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:01.257 request: 00:19:01.257 { 00:19:01.257 "base_bdev": "BaseBdev1", 00:19:01.257 "raid_bdev": "raid_bdev1", 00:19:01.257 "method": "bdev_raid_add_base_bdev", 00:19:01.257 "req_id": 1 00:19:01.257 } 00:19:01.257 Got JSON-RPC error response 00:19:01.257 response: 00:19:01.257 { 00:19:01.257 "code": -22, 00:19:01.257 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:01.257 } 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:01.257 14:18:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.195 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.196 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.196 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.196 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.196 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.196 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.455 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.455 "name": "raid_bdev1", 00:19:02.455 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:19:02.455 "strip_size_kb": 0, 00:19:02.455 "state": "online", 00:19:02.455 "raid_level": "raid1", 00:19:02.455 "superblock": true, 00:19:02.455 "num_base_bdevs": 4, 00:19:02.455 "num_base_bdevs_discovered": 2, 00:19:02.455 "num_base_bdevs_operational": 2, 00:19:02.455 "base_bdevs_list": [ 00:19:02.455 { 00:19:02.455 "name": null, 00:19:02.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.455 "is_configured": false, 00:19:02.455 "data_offset": 0, 00:19:02.455 "data_size": 63488 00:19:02.455 }, 00:19:02.455 { 
00:19:02.455 "name": null, 00:19:02.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.455 "is_configured": false, 00:19:02.455 "data_offset": 2048, 00:19:02.455 "data_size": 63488 00:19:02.455 }, 00:19:02.455 { 00:19:02.455 "name": "BaseBdev3", 00:19:02.455 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:19:02.455 "is_configured": true, 00:19:02.455 "data_offset": 2048, 00:19:02.455 "data_size": 63488 00:19:02.455 }, 00:19:02.455 { 00:19:02.455 "name": "BaseBdev4", 00:19:02.455 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:19:02.455 "is_configured": true, 00:19:02.455 "data_offset": 2048, 00:19:02.455 "data_size": 63488 00:19:02.455 } 00:19:02.455 ] 00:19:02.455 }' 00:19:02.455 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.455 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.714 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:02.714 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.714 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:02.714 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:02.714 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.715 "name": "raid_bdev1", 00:19:02.715 "uuid": "d07a74ce-099f-48a5-9bae-5f02514873c9", 00:19:02.715 "strip_size_kb": 0, 00:19:02.715 "state": "online", 00:19:02.715 "raid_level": "raid1", 00:19:02.715 "superblock": true, 00:19:02.715 "num_base_bdevs": 4, 00:19:02.715 "num_base_bdevs_discovered": 2, 00:19:02.715 "num_base_bdevs_operational": 2, 00:19:02.715 "base_bdevs_list": [ 00:19:02.715 { 00:19:02.715 "name": null, 00:19:02.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.715 "is_configured": false, 00:19:02.715 "data_offset": 0, 00:19:02.715 "data_size": 63488 00:19:02.715 }, 00:19:02.715 { 00:19:02.715 "name": null, 00:19:02.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.715 "is_configured": false, 00:19:02.715 "data_offset": 2048, 00:19:02.715 "data_size": 63488 00:19:02.715 }, 00:19:02.715 { 00:19:02.715 "name": "BaseBdev3", 00:19:02.715 "uuid": "13249d67-a89f-52fc-9ec1-505016db984b", 00:19:02.715 "is_configured": true, 00:19:02.715 "data_offset": 2048, 00:19:02.715 "data_size": 63488 00:19:02.715 }, 00:19:02.715 { 00:19:02.715 "name": "BaseBdev4", 00:19:02.715 "uuid": "bdc6335e-cbcf-5cca-bcda-2999a7f033e8", 00:19:02.715 "is_configured": true, 00:19:02.715 "data_offset": 2048, 00:19:02.715 "data_size": 63488 00:19:02.715 } 00:19:02.715 ] 00:19:02.715 }' 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:02.715 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@784 -- # killprocess 79388 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79388 ']' 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79388 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79388 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.972 killing process with pid 79388 00:19:02.972 Received shutdown signal, test time was about 18.013323 seconds 00:19:02.972 00:19:02.972 Latency(us) 00:19:02.972 [2024-11-27T14:18:33.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.972 [2024-11-27T14:18:33.928Z] =================================================================================================================== 00:19:02.972 [2024-11-27T14:18:33.928Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79388' 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79388 00:19:02.972 [2024-11-27 14:18:33.714524] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.972 [2024-11-27 14:18:33.714657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.972 14:18:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79388 00:19:02.972 [2024-11-27 14:18:33.714738] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.972 [2024-11-27 14:18:33.714749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:03.539 [2024-11-27 14:18:34.197304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.918 ************************************ 00:19:04.918 END TEST raid_rebuild_test_sb_io 00:19:04.918 ************************************ 00:19:04.918 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:04.918 00:19:04.918 real 0m21.678s 00:19:04.918 user 0m28.133s 00:19:04.918 sys 0m2.433s 00:19:04.918 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.918 14:18:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.918 14:18:35 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:19:04.918 14:18:35 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:19:04.918 14:18:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:04.918 14:18:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.918 14:18:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.918 ************************************ 00:19:04.918 START TEST raid5f_state_function_test 00:19:04.918 ************************************ 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 
00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:04.918 
14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:19:04.918 Process raid pid: 80112 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80112 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80112' 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80112 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:04.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80112 ']' 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.918 14:18:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.918 [2024-11-27 14:18:35.681034] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:04.918 [2024-11-27 14:18:35.681260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.918 [2024-11-27 14:18:35.856602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.178 [2024-11-27 14:18:35.990759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.438 [2024-11-27 14:18:36.230400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.438 [2024-11-27 14:18:36.230438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.697 [2024-11-27 14:18:36.550531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.697 [2024-11-27 14:18:36.550651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.697 [2024-11-27 14:18:36.550667] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.697 [2024-11-27 14:18:36.550678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.697 [2024-11-27 14:18:36.550686] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:05.697 [2024-11-27 14:18:36.550696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.697 "name": "Existed_Raid", 00:19:05.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.697 "strip_size_kb": 64, 00:19:05.697 "state": "configuring", 00:19:05.697 "raid_level": "raid5f", 00:19:05.697 "superblock": false, 00:19:05.697 "num_base_bdevs": 3, 00:19:05.697 "num_base_bdevs_discovered": 0, 00:19:05.697 "num_base_bdevs_operational": 3, 00:19:05.697 "base_bdevs_list": [ 00:19:05.697 { 00:19:05.697 "name": "BaseBdev1", 00:19:05.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.697 "is_configured": false, 00:19:05.697 "data_offset": 0, 00:19:05.697 "data_size": 0 00:19:05.697 }, 00:19:05.697 { 00:19:05.697 "name": "BaseBdev2", 00:19:05.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.697 "is_configured": false, 00:19:05.697 "data_offset": 0, 00:19:05.697 "data_size": 0 00:19:05.697 }, 00:19:05.697 { 00:19:05.697 "name": "BaseBdev3", 00:19:05.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.697 "is_configured": false, 00:19:05.697 "data_offset": 0, 00:19:05.697 "data_size": 0 00:19:05.697 } 00:19:05.697 ] 00:19:05.697 }' 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.697 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.262 14:18:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:06.262 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:06.262 14:18:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.262 [2024-11-27 14:18:36.997746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.262 [2024-11-27 14:18:36.997842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:06.262 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.262 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:06.262 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.262 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.262 [2024-11-27 14:18:37.005771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.262 [2024-11-27 14:18:37.005866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.262 [2024-11-27 14:18:37.005900] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.263 [2024-11-27 14:18:37.005928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.263 [2024-11-27 14:18:37.005952] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.263 [2024-11-27 14:18:37.005977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.263 [2024-11-27 14:18:37.055818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.263 BaseBdev1 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.263 [ 00:19:06.263 { 00:19:06.263 "name": "BaseBdev1", 00:19:06.263 "aliases": [ 
00:19:06.263 "8de00207-e79d-4b51-bd9b-2f51e35edeb3" 00:19:06.263 ], 00:19:06.263 "product_name": "Malloc disk", 00:19:06.263 "block_size": 512, 00:19:06.263 "num_blocks": 65536, 00:19:06.263 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:06.263 "assigned_rate_limits": { 00:19:06.263 "rw_ios_per_sec": 0, 00:19:06.263 "rw_mbytes_per_sec": 0, 00:19:06.263 "r_mbytes_per_sec": 0, 00:19:06.263 "w_mbytes_per_sec": 0 00:19:06.263 }, 00:19:06.263 "claimed": true, 00:19:06.263 "claim_type": "exclusive_write", 00:19:06.263 "zoned": false, 00:19:06.263 "supported_io_types": { 00:19:06.263 "read": true, 00:19:06.263 "write": true, 00:19:06.263 "unmap": true, 00:19:06.263 "flush": true, 00:19:06.263 "reset": true, 00:19:06.263 "nvme_admin": false, 00:19:06.263 "nvme_io": false, 00:19:06.263 "nvme_io_md": false, 00:19:06.263 "write_zeroes": true, 00:19:06.263 "zcopy": true, 00:19:06.263 "get_zone_info": false, 00:19:06.263 "zone_management": false, 00:19:06.263 "zone_append": false, 00:19:06.263 "compare": false, 00:19:06.263 "compare_and_write": false, 00:19:06.263 "abort": true, 00:19:06.263 "seek_hole": false, 00:19:06.263 "seek_data": false, 00:19:06.263 "copy": true, 00:19:06.263 "nvme_iov_md": false 00:19:06.263 }, 00:19:06.263 "memory_domains": [ 00:19:06.263 { 00:19:06.263 "dma_device_id": "system", 00:19:06.263 "dma_device_type": 1 00:19:06.263 }, 00:19:06.263 { 00:19:06.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.263 "dma_device_type": 2 00:19:06.263 } 00:19:06.263 ], 00:19:06.263 "driver_specific": {} 00:19:06.263 } 00:19:06.263 ] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.263 "name": "Existed_Raid", 00:19:06.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.263 "strip_size_kb": 64, 00:19:06.263 "state": "configuring", 00:19:06.263 "raid_level": "raid5f", 00:19:06.263 "superblock": false, 00:19:06.263 "num_base_bdevs": 3, 00:19:06.263 "num_base_bdevs_discovered": 1, 00:19:06.263 
"num_base_bdevs_operational": 3, 00:19:06.263 "base_bdevs_list": [ 00:19:06.263 { 00:19:06.263 "name": "BaseBdev1", 00:19:06.263 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:06.263 "is_configured": true, 00:19:06.263 "data_offset": 0, 00:19:06.263 "data_size": 65536 00:19:06.263 }, 00:19:06.263 { 00:19:06.263 "name": "BaseBdev2", 00:19:06.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.263 "is_configured": false, 00:19:06.263 "data_offset": 0, 00:19:06.263 "data_size": 0 00:19:06.263 }, 00:19:06.263 { 00:19:06.263 "name": "BaseBdev3", 00:19:06.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.263 "is_configured": false, 00:19:06.263 "data_offset": 0, 00:19:06.263 "data_size": 0 00:19:06.263 } 00:19:06.263 ] 00:19:06.263 }' 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.263 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.832 [2024-11-27 14:18:37.551048] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.832 [2024-11-27 14:18:37.551200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.832 [2024-11-27 14:18:37.563095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.832 [2024-11-27 14:18:37.565199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.832 [2024-11-27 14:18:37.565295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.832 [2024-11-27 14:18:37.565311] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.832 [2024-11-27 14:18:37.565321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.832 14:18:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.832 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.832 "name": "Existed_Raid", 00:19:06.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.832 "strip_size_kb": 64, 00:19:06.832 "state": "configuring", 00:19:06.832 "raid_level": "raid5f", 00:19:06.832 "superblock": false, 00:19:06.832 "num_base_bdevs": 3, 00:19:06.832 "num_base_bdevs_discovered": 1, 00:19:06.832 "num_base_bdevs_operational": 3, 00:19:06.832 "base_bdevs_list": [ 00:19:06.832 { 00:19:06.832 "name": "BaseBdev1", 00:19:06.832 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:06.832 "is_configured": true, 00:19:06.832 "data_offset": 0, 00:19:06.832 "data_size": 65536 00:19:06.832 }, 00:19:06.832 { 00:19:06.833 "name": "BaseBdev2", 00:19:06.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.833 "is_configured": false, 00:19:06.833 "data_offset": 0, 00:19:06.833 "data_size": 0 00:19:06.833 }, 00:19:06.833 { 00:19:06.833 "name": "BaseBdev3", 00:19:06.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.833 "is_configured": false, 
00:19:06.833 "data_offset": 0, 00:19:06.833 "data_size": 0 00:19:06.833 } 00:19:06.833 ] 00:19:06.833 }' 00:19:06.833 14:18:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.833 14:18:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.093 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:07.093 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.093 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.353 [2024-11-27 14:18:38.065549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.353 BaseBdev2 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.353 14:18:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.353 [ 00:19:07.353 { 00:19:07.353 "name": "BaseBdev2", 00:19:07.353 "aliases": [ 00:19:07.353 "70f18421-6bc1-4e5a-be05-f5a1fd06734c" 00:19:07.353 ], 00:19:07.353 "product_name": "Malloc disk", 00:19:07.353 "block_size": 512, 00:19:07.353 "num_blocks": 65536, 00:19:07.353 "uuid": "70f18421-6bc1-4e5a-be05-f5a1fd06734c", 00:19:07.353 "assigned_rate_limits": { 00:19:07.353 "rw_ios_per_sec": 0, 00:19:07.353 "rw_mbytes_per_sec": 0, 00:19:07.353 "r_mbytes_per_sec": 0, 00:19:07.353 "w_mbytes_per_sec": 0 00:19:07.353 }, 00:19:07.353 "claimed": true, 00:19:07.353 "claim_type": "exclusive_write", 00:19:07.353 "zoned": false, 00:19:07.353 "supported_io_types": { 00:19:07.353 "read": true, 00:19:07.353 "write": true, 00:19:07.353 "unmap": true, 00:19:07.353 "flush": true, 00:19:07.353 "reset": true, 00:19:07.353 "nvme_admin": false, 00:19:07.353 "nvme_io": false, 00:19:07.353 "nvme_io_md": false, 00:19:07.353 "write_zeroes": true, 00:19:07.353 "zcopy": true, 00:19:07.353 "get_zone_info": false, 00:19:07.353 "zone_management": false, 00:19:07.353 "zone_append": false, 00:19:07.353 "compare": false, 00:19:07.353 "compare_and_write": false, 00:19:07.353 "abort": true, 00:19:07.353 "seek_hole": false, 00:19:07.353 "seek_data": false, 00:19:07.353 "copy": true, 00:19:07.353 "nvme_iov_md": false 00:19:07.353 }, 00:19:07.353 "memory_domains": [ 00:19:07.353 { 00:19:07.353 "dma_device_id": "system", 00:19:07.353 "dma_device_type": 1 00:19:07.353 }, 00:19:07.353 { 00:19:07.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.353 
"dma_device_type": 2 00:19:07.353 } 00:19:07.353 ], 00:19:07.353 "driver_specific": {} 00:19:07.353 } 00:19:07.353 ] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.353 14:18:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.353 "name": "Existed_Raid", 00:19:07.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.353 "strip_size_kb": 64, 00:19:07.353 "state": "configuring", 00:19:07.353 "raid_level": "raid5f", 00:19:07.353 "superblock": false, 00:19:07.353 "num_base_bdevs": 3, 00:19:07.353 "num_base_bdevs_discovered": 2, 00:19:07.353 "num_base_bdevs_operational": 3, 00:19:07.353 "base_bdevs_list": [ 00:19:07.353 { 00:19:07.353 "name": "BaseBdev1", 00:19:07.353 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:07.353 "is_configured": true, 00:19:07.353 "data_offset": 0, 00:19:07.353 "data_size": 65536 00:19:07.353 }, 00:19:07.353 { 00:19:07.353 "name": "BaseBdev2", 00:19:07.353 "uuid": "70f18421-6bc1-4e5a-be05-f5a1fd06734c", 00:19:07.353 "is_configured": true, 00:19:07.353 "data_offset": 0, 00:19:07.353 "data_size": 65536 00:19:07.353 }, 00:19:07.353 { 00:19:07.353 "name": "BaseBdev3", 00:19:07.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.353 "is_configured": false, 00:19:07.353 "data_offset": 0, 00:19:07.353 "data_size": 0 00:19:07.353 } 00:19:07.353 ] 00:19:07.353 }' 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.353 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
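The `verify_raid_bdev_state` calls traced above boil down to: fetch the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all`, select the entry by name with `jq`, and compare fields such as `state` against an expected value. A minimal self-contained sketch of that comparison follows, using a canned JSON snippet in place of the live RPC output and plain `grep`/`cut` in place of `jq`; everything except the field names shown in the log above is illustrative, not taken from `bdev_raid.sh`.

```shell
#!/usr/bin/env bash
# Sketch of the check performed by verify_raid_bdev_state. In the real
# test, raid_bdev_info comes from "rpc_cmd bdev_raid_get_bdevs all" piped
# through jq; here a canned snippet stands in for the live RPC output.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2
}'
expected_state=configuring

# Extract the "state" field with standard tools (the real script uses jq).
state=$(grep -o '"state": "[a-z]*"' <<<"$raid_bdev_info" | cut -d'"' -f4)

if [ "$state" = "$expected_state" ]; then
    echo "state OK: $state"
else
    echo "state mismatch: got $state, want $expected_state" >&2
    exit 1
fi
```

The same pattern repeats later in the run with `expected_state=online` once `BaseBdev3` joins and the raid volume finishes configuring.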
00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.923 [2024-11-27 14:18:38.656741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.923 [2024-11-27 14:18:38.656898] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:07.923 [2024-11-27 14:18:38.656938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:07.923 [2024-11-27 14:18:38.657304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:07.923 [2024-11-27 14:18:38.664084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:07.923 [2024-11-27 14:18:38.664167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:07.923 [2024-11-27 14:18:38.664519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.923 BaseBdev3 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:07.923 14:18:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.923 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.923 [ 00:19:07.923 { 00:19:07.923 "name": "BaseBdev3", 00:19:07.923 "aliases": [ 00:19:07.923 "72a3ba69-d866-4d3a-955a-588f9e8f314c" 00:19:07.923 ], 00:19:07.923 "product_name": "Malloc disk", 00:19:07.923 "block_size": 512, 00:19:07.923 "num_blocks": 65536, 00:19:07.923 "uuid": "72a3ba69-d866-4d3a-955a-588f9e8f314c", 00:19:07.923 "assigned_rate_limits": { 00:19:07.923 "rw_ios_per_sec": 0, 00:19:07.923 "rw_mbytes_per_sec": 0, 00:19:07.923 "r_mbytes_per_sec": 0, 00:19:07.923 "w_mbytes_per_sec": 0 00:19:07.923 }, 00:19:07.923 "claimed": true, 00:19:07.923 "claim_type": "exclusive_write", 00:19:07.923 "zoned": false, 00:19:07.923 "supported_io_types": { 00:19:07.923 "read": true, 00:19:07.923 "write": true, 00:19:07.923 "unmap": true, 00:19:07.923 "flush": true, 00:19:07.923 "reset": true, 00:19:07.923 "nvme_admin": false, 00:19:07.923 "nvme_io": false, 00:19:07.923 "nvme_io_md": false, 00:19:07.923 "write_zeroes": true, 00:19:07.924 "zcopy": true, 00:19:07.924 "get_zone_info": false, 00:19:07.924 "zone_management": false, 00:19:07.924 "zone_append": false, 00:19:07.924 "compare": false, 00:19:07.924 "compare_and_write": false, 00:19:07.924 "abort": true, 00:19:07.924 "seek_hole": false, 00:19:07.924 "seek_data": false, 00:19:07.924 "copy": true, 00:19:07.924 "nvme_iov_md": false 00:19:07.924 }, 00:19:07.924 
"memory_domains": [ 00:19:07.924 { 00:19:07.924 "dma_device_id": "system", 00:19:07.924 "dma_device_type": 1 00:19:07.924 }, 00:19:07.924 { 00:19:07.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.924 "dma_device_type": 2 00:19:07.924 } 00:19:07.924 ], 00:19:07.924 "driver_specific": {} 00:19:07.924 } 00:19:07.924 ] 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.924 "name": "Existed_Raid", 00:19:07.924 "uuid": "372518d1-c127-4616-bb2a-8d98ee99c85f", 00:19:07.924 "strip_size_kb": 64, 00:19:07.924 "state": "online", 00:19:07.924 "raid_level": "raid5f", 00:19:07.924 "superblock": false, 00:19:07.924 "num_base_bdevs": 3, 00:19:07.924 "num_base_bdevs_discovered": 3, 00:19:07.924 "num_base_bdevs_operational": 3, 00:19:07.924 "base_bdevs_list": [ 00:19:07.924 { 00:19:07.924 "name": "BaseBdev1", 00:19:07.924 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:07.924 "is_configured": true, 00:19:07.924 "data_offset": 0, 00:19:07.924 "data_size": 65536 00:19:07.924 }, 00:19:07.924 { 00:19:07.924 "name": "BaseBdev2", 00:19:07.924 "uuid": "70f18421-6bc1-4e5a-be05-f5a1fd06734c", 00:19:07.924 "is_configured": true, 00:19:07.924 "data_offset": 0, 00:19:07.924 "data_size": 65536 00:19:07.924 }, 00:19:07.924 { 00:19:07.924 "name": "BaseBdev3", 00:19:07.924 "uuid": "72a3ba69-d866-4d3a-955a-588f9e8f314c", 00:19:07.924 "is_configured": true, 00:19:07.924 "data_offset": 0, 00:19:07.924 "data_size": 65536 00:19:07.924 } 00:19:07.924 ] 00:19:07.924 }' 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.924 14:18:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.184 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.184 [2024-11-27 14:18:39.123370] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.441 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.441 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.441 "name": "Existed_Raid", 00:19:08.441 "aliases": [ 00:19:08.441 "372518d1-c127-4616-bb2a-8d98ee99c85f" 00:19:08.441 ], 00:19:08.441 "product_name": "Raid Volume", 00:19:08.441 "block_size": 512, 00:19:08.441 "num_blocks": 131072, 00:19:08.441 "uuid": "372518d1-c127-4616-bb2a-8d98ee99c85f", 00:19:08.441 "assigned_rate_limits": { 00:19:08.441 "rw_ios_per_sec": 0, 00:19:08.441 "rw_mbytes_per_sec": 0, 00:19:08.441 "r_mbytes_per_sec": 0, 00:19:08.441 "w_mbytes_per_sec": 0 00:19:08.441 }, 00:19:08.441 "claimed": false, 00:19:08.441 "zoned": false, 00:19:08.441 
"supported_io_types": { 00:19:08.441 "read": true, 00:19:08.441 "write": true, 00:19:08.441 "unmap": false, 00:19:08.441 "flush": false, 00:19:08.441 "reset": true, 00:19:08.441 "nvme_admin": false, 00:19:08.441 "nvme_io": false, 00:19:08.441 "nvme_io_md": false, 00:19:08.441 "write_zeroes": true, 00:19:08.441 "zcopy": false, 00:19:08.441 "get_zone_info": false, 00:19:08.441 "zone_management": false, 00:19:08.441 "zone_append": false, 00:19:08.441 "compare": false, 00:19:08.441 "compare_and_write": false, 00:19:08.441 "abort": false, 00:19:08.441 "seek_hole": false, 00:19:08.441 "seek_data": false, 00:19:08.441 "copy": false, 00:19:08.441 "nvme_iov_md": false 00:19:08.441 }, 00:19:08.441 "driver_specific": { 00:19:08.441 "raid": { 00:19:08.441 "uuid": "372518d1-c127-4616-bb2a-8d98ee99c85f", 00:19:08.441 "strip_size_kb": 64, 00:19:08.441 "state": "online", 00:19:08.441 "raid_level": "raid5f", 00:19:08.441 "superblock": false, 00:19:08.441 "num_base_bdevs": 3, 00:19:08.441 "num_base_bdevs_discovered": 3, 00:19:08.441 "num_base_bdevs_operational": 3, 00:19:08.441 "base_bdevs_list": [ 00:19:08.441 { 00:19:08.441 "name": "BaseBdev1", 00:19:08.441 "uuid": "8de00207-e79d-4b51-bd9b-2f51e35edeb3", 00:19:08.441 "is_configured": true, 00:19:08.441 "data_offset": 0, 00:19:08.441 "data_size": 65536 00:19:08.441 }, 00:19:08.441 { 00:19:08.441 "name": "BaseBdev2", 00:19:08.441 "uuid": "70f18421-6bc1-4e5a-be05-f5a1fd06734c", 00:19:08.441 "is_configured": true, 00:19:08.441 "data_offset": 0, 00:19:08.441 "data_size": 65536 00:19:08.441 }, 00:19:08.441 { 00:19:08.441 "name": "BaseBdev3", 00:19:08.441 "uuid": "72a3ba69-d866-4d3a-955a-588f9e8f314c", 00:19:08.441 "is_configured": true, 00:19:08.441 "data_offset": 0, 00:19:08.441 "data_size": 65536 00:19:08.441 } 00:19:08.441 ] 00:19:08.441 } 00:19:08.442 } 00:19:08.442 }' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:08.442 BaseBdev2 00:19:08.442 BaseBdev3' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.442 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.700 [2024-11-27 14:18:39.414717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local 
expected_state 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.700 14:18:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.700 "name": "Existed_Raid", 00:19:08.700 "uuid": "372518d1-c127-4616-bb2a-8d98ee99c85f", 00:19:08.700 "strip_size_kb": 64, 00:19:08.700 "state": "online", 00:19:08.700 "raid_level": "raid5f", 00:19:08.700 "superblock": false, 00:19:08.700 "num_base_bdevs": 3, 00:19:08.700 "num_base_bdevs_discovered": 2, 00:19:08.700 "num_base_bdevs_operational": 2, 00:19:08.700 "base_bdevs_list": [ 00:19:08.700 { 00:19:08.700 "name": null, 00:19:08.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.700 "is_configured": false, 00:19:08.700 "data_offset": 0, 00:19:08.700 "data_size": 65536 00:19:08.700 }, 00:19:08.700 { 00:19:08.700 "name": "BaseBdev2", 00:19:08.700 "uuid": "70f18421-6bc1-4e5a-be05-f5a1fd06734c", 00:19:08.700 "is_configured": true, 00:19:08.700 "data_offset": 0, 00:19:08.700 "data_size": 65536 00:19:08.700 }, 00:19:08.700 { 00:19:08.700 "name": "BaseBdev3", 00:19:08.700 "uuid": "72a3ba69-d866-4d3a-955a-588f9e8f314c", 00:19:08.700 "is_configured": true, 00:19:08.700 "data_offset": 0, 00:19:08.700 "data_size": 65536 00:19:08.700 } 00:19:08.700 ] 00:19:08.700 }' 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.700 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.268 14:18:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.268 [2024-11-27 14:18:40.041807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:09.268 [2024-11-27 14:18:40.041921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:09.268 [2024-11-27 14:18:40.155658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.268 14:18:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.268 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.268 [2024-11-27 14:18:40.211624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:09.268 [2024-11-27 14:18:40.211688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 BaseBdev2 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 14:18:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 [ 00:19:09.586 { 00:19:09.586 "name": "BaseBdev2", 00:19:09.586 "aliases": [ 00:19:09.586 "c832d1ee-e085-45d3-a26b-d7019f3c916c" 00:19:09.586 ], 00:19:09.586 "product_name": "Malloc disk", 00:19:09.586 "block_size": 512, 00:19:09.586 "num_blocks": 65536, 00:19:09.586 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:09.586 "assigned_rate_limits": { 00:19:09.586 "rw_ios_per_sec": 0, 00:19:09.586 "rw_mbytes_per_sec": 0, 00:19:09.586 "r_mbytes_per_sec": 0, 00:19:09.586 "w_mbytes_per_sec": 0 00:19:09.586 }, 00:19:09.586 "claimed": false, 00:19:09.586 "zoned": false, 00:19:09.586 "supported_io_types": { 00:19:09.586 "read": true, 00:19:09.586 "write": true, 00:19:09.586 "unmap": true, 00:19:09.586 "flush": true, 00:19:09.586 "reset": true, 00:19:09.586 "nvme_admin": false, 00:19:09.586 "nvme_io": false, 00:19:09.586 "nvme_io_md": false, 00:19:09.586 "write_zeroes": true, 00:19:09.586 "zcopy": true, 00:19:09.586 "get_zone_info": false, 00:19:09.586 "zone_management": false, 00:19:09.586 "zone_append": false, 00:19:09.586 "compare": false, 00:19:09.586 "compare_and_write": false, 00:19:09.586 "abort": true, 00:19:09.586 "seek_hole": false, 00:19:09.586 "seek_data": false, 00:19:09.586 "copy": true, 00:19:09.586 "nvme_iov_md": false 00:19:09.586 }, 00:19:09.586 "memory_domains": [ 00:19:09.586 { 00:19:09.586 "dma_device_id": "system", 00:19:09.586 "dma_device_type": 1 00:19:09.586 }, 00:19:09.586 { 00:19:09.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.586 "dma_device_type": 2 00:19:09.586 } 00:19:09.586 ], 
00:19:09.586 "driver_specific": {} 00:19:09.586 } 00:19:09.586 ] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.586 BaseBdev3 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.586 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.845 [ 00:19:09.845 { 00:19:09.845 "name": "BaseBdev3", 00:19:09.845 "aliases": [ 00:19:09.845 "849e67dd-0f87-46a1-b919-3f658646cc78" 00:19:09.845 ], 00:19:09.845 "product_name": "Malloc disk", 00:19:09.845 "block_size": 512, 00:19:09.845 "num_blocks": 65536, 00:19:09.845 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:09.845 "assigned_rate_limits": { 00:19:09.845 "rw_ios_per_sec": 0, 00:19:09.845 "rw_mbytes_per_sec": 0, 00:19:09.845 "r_mbytes_per_sec": 0, 00:19:09.845 "w_mbytes_per_sec": 0 00:19:09.845 }, 00:19:09.845 "claimed": false, 00:19:09.845 "zoned": false, 00:19:09.845 "supported_io_types": { 00:19:09.845 "read": true, 00:19:09.845 "write": true, 00:19:09.845 "unmap": true, 00:19:09.845 "flush": true, 00:19:09.845 "reset": true, 00:19:09.845 "nvme_admin": false, 00:19:09.845 "nvme_io": false, 00:19:09.845 "nvme_io_md": false, 00:19:09.845 "write_zeroes": true, 00:19:09.845 "zcopy": true, 00:19:09.845 "get_zone_info": false, 00:19:09.845 "zone_management": false, 00:19:09.845 "zone_append": false, 00:19:09.845 "compare": false, 00:19:09.845 "compare_and_write": false, 00:19:09.845 "abort": true, 00:19:09.845 "seek_hole": false, 00:19:09.845 "seek_data": false, 00:19:09.845 "copy": true, 00:19:09.845 "nvme_iov_md": false 00:19:09.845 }, 00:19:09.845 "memory_domains": [ 00:19:09.845 { 00:19:09.845 "dma_device_id": "system", 00:19:09.845 "dma_device_type": 1 00:19:09.845 }, 00:19:09.845 { 00:19:09.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.845 "dma_device_type": 2 00:19:09.845 
} 00:19:09.845 ], 00:19:09.845 "driver_specific": {} 00:19:09.845 } 00:19:09.845 ] 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.845 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.845 [2024-11-27 14:18:40.545242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:09.845 [2024-11-27 14:18:40.545383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:09.846 [2024-11-27 14:18:40.545451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.846 [2024-11-27 14:18:40.547717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.846 "name": "Existed_Raid", 00:19:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.846 "strip_size_kb": 64, 00:19:09.846 "state": "configuring", 00:19:09.846 "raid_level": "raid5f", 00:19:09.846 "superblock": false, 00:19:09.846 "num_base_bdevs": 3, 00:19:09.846 "num_base_bdevs_discovered": 2, 00:19:09.846 "num_base_bdevs_operational": 3, 00:19:09.846 "base_bdevs_list": [ 00:19:09.846 { 00:19:09.846 "name": "BaseBdev1", 00:19:09.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.846 "is_configured": false, 00:19:09.846 "data_offset": 0, 
00:19:09.846 "data_size": 0 00:19:09.846 }, 00:19:09.846 { 00:19:09.846 "name": "BaseBdev2", 00:19:09.846 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:09.846 "is_configured": true, 00:19:09.846 "data_offset": 0, 00:19:09.846 "data_size": 65536 00:19:09.846 }, 00:19:09.846 { 00:19:09.846 "name": "BaseBdev3", 00:19:09.846 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:09.846 "is_configured": true, 00:19:09.846 "data_offset": 0, 00:19:09.846 "data_size": 65536 00:19:09.846 } 00:19:09.846 ] 00:19:09.846 }' 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.846 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.104 [2024-11-27 14:18:40.972516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.104 14:18:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.104 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.104 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.104 "name": "Existed_Raid", 00:19:10.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.104 "strip_size_kb": 64, 00:19:10.104 "state": "configuring", 00:19:10.104 "raid_level": "raid5f", 00:19:10.104 "superblock": false, 00:19:10.104 "num_base_bdevs": 3, 00:19:10.104 "num_base_bdevs_discovered": 1, 00:19:10.104 "num_base_bdevs_operational": 3, 00:19:10.104 "base_bdevs_list": [ 00:19:10.104 { 00:19:10.104 "name": "BaseBdev1", 00:19:10.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.104 "is_configured": false, 00:19:10.104 "data_offset": 0, 00:19:10.105 "data_size": 0 00:19:10.105 }, 00:19:10.105 { 00:19:10.105 "name": null, 00:19:10.105 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:10.105 "is_configured": false, 00:19:10.105 "data_offset": 0, 00:19:10.105 "data_size": 65536 
00:19:10.105 }, 00:19:10.105 { 00:19:10.105 "name": "BaseBdev3", 00:19:10.105 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:10.105 "is_configured": true, 00:19:10.105 "data_offset": 0, 00:19:10.105 "data_size": 65536 00:19:10.105 } 00:19:10.105 ] 00:19:10.105 }' 00:19:10.105 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.105 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 [2024-11-27 14:18:41.489900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.672 BaseBdev1 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 [ 00:19:10.672 { 00:19:10.672 "name": "BaseBdev1", 00:19:10.672 "aliases": [ 00:19:10.672 "819b6955-3086-4bf6-9d3a-2d7ae5c63e76" 00:19:10.672 ], 00:19:10.672 "product_name": "Malloc disk", 00:19:10.672 "block_size": 512, 00:19:10.672 "num_blocks": 65536, 00:19:10.672 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:10.672 "assigned_rate_limits": { 00:19:10.672 "rw_ios_per_sec": 0, 00:19:10.672 "rw_mbytes_per_sec": 0, 00:19:10.672 "r_mbytes_per_sec": 0, 00:19:10.672 "w_mbytes_per_sec": 0 00:19:10.672 }, 00:19:10.672 "claimed": true, 00:19:10.672 "claim_type": "exclusive_write", 00:19:10.672 "zoned": false, 00:19:10.672 "supported_io_types": { 00:19:10.672 "read": true, 00:19:10.672 "write": true, 
00:19:10.672 "unmap": true, 00:19:10.672 "flush": true, 00:19:10.672 "reset": true, 00:19:10.672 "nvme_admin": false, 00:19:10.672 "nvme_io": false, 00:19:10.672 "nvme_io_md": false, 00:19:10.672 "write_zeroes": true, 00:19:10.672 "zcopy": true, 00:19:10.672 "get_zone_info": false, 00:19:10.672 "zone_management": false, 00:19:10.672 "zone_append": false, 00:19:10.672 "compare": false, 00:19:10.672 "compare_and_write": false, 00:19:10.672 "abort": true, 00:19:10.672 "seek_hole": false, 00:19:10.672 "seek_data": false, 00:19:10.672 "copy": true, 00:19:10.672 "nvme_iov_md": false 00:19:10.672 }, 00:19:10.672 "memory_domains": [ 00:19:10.672 { 00:19:10.672 "dma_device_id": "system", 00:19:10.672 "dma_device_type": 1 00:19:10.672 }, 00:19:10.672 { 00:19:10.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.672 "dma_device_type": 2 00:19:10.672 } 00:19:10.672 ], 00:19:10.672 "driver_specific": {} 00:19:10.672 } 00:19:10.672 ] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.672 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.672 "name": "Existed_Raid", 00:19:10.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.672 "strip_size_kb": 64, 00:19:10.672 "state": "configuring", 00:19:10.672 "raid_level": "raid5f", 00:19:10.672 "superblock": false, 00:19:10.672 "num_base_bdevs": 3, 00:19:10.673 "num_base_bdevs_discovered": 2, 00:19:10.673 "num_base_bdevs_operational": 3, 00:19:10.673 "base_bdevs_list": [ 00:19:10.673 { 00:19:10.673 "name": "BaseBdev1", 00:19:10.673 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:10.673 "is_configured": true, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 65536 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "name": null, 00:19:10.673 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:10.673 "is_configured": false, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 65536 00:19:10.673 }, 00:19:10.673 { 00:19:10.673 "name": "BaseBdev3", 00:19:10.673 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:10.673 
"is_configured": true, 00:19:10.673 "data_offset": 0, 00:19:10.673 "data_size": 65536 00:19:10.673 } 00:19:10.673 ] 00:19:10.673 }' 00:19:10.673 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.673 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.240 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.240 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.240 14:18:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.240 14:18:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.240 [2024-11-27 14:18:42.057039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 
00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.240 "name": "Existed_Raid", 00:19:11.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.240 "strip_size_kb": 64, 00:19:11.240 "state": "configuring", 00:19:11.240 "raid_level": "raid5f", 00:19:11.240 "superblock": false, 00:19:11.240 "num_base_bdevs": 3, 00:19:11.240 "num_base_bdevs_discovered": 1, 00:19:11.240 "num_base_bdevs_operational": 3, 00:19:11.240 "base_bdevs_list": [ 00:19:11.240 { 00:19:11.240 "name": "BaseBdev1", 00:19:11.240 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:11.240 
"is_configured": true, 00:19:11.240 "data_offset": 0, 00:19:11.240 "data_size": 65536 00:19:11.240 }, 00:19:11.240 { 00:19:11.240 "name": null, 00:19:11.240 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:11.240 "is_configured": false, 00:19:11.240 "data_offset": 0, 00:19:11.240 "data_size": 65536 00:19:11.240 }, 00:19:11.240 { 00:19:11.240 "name": null, 00:19:11.240 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:11.240 "is_configured": false, 00:19:11.240 "data_offset": 0, 00:19:11.240 "data_size": 65536 00:19:11.240 } 00:19:11.240 ] 00:19:11.240 }' 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.240 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 [2024-11-27 14:18:42.524323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.808 14:18:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.808 "name": "Existed_Raid", 
00:19:11.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.808 "strip_size_kb": 64, 00:19:11.808 "state": "configuring", 00:19:11.808 "raid_level": "raid5f", 00:19:11.808 "superblock": false, 00:19:11.808 "num_base_bdevs": 3, 00:19:11.808 "num_base_bdevs_discovered": 2, 00:19:11.808 "num_base_bdevs_operational": 3, 00:19:11.808 "base_bdevs_list": [ 00:19:11.808 { 00:19:11.808 "name": "BaseBdev1", 00:19:11.808 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:11.808 "is_configured": true, 00:19:11.808 "data_offset": 0, 00:19:11.808 "data_size": 65536 00:19:11.808 }, 00:19:11.808 { 00:19:11.808 "name": null, 00:19:11.808 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:11.808 "is_configured": false, 00:19:11.808 "data_offset": 0, 00:19:11.808 "data_size": 65536 00:19:11.808 }, 00:19:11.808 { 00:19:11.808 "name": "BaseBdev3", 00:19:11.808 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:11.808 "is_configured": true, 00:19:11.808 "data_offset": 0, 00:19:11.808 "data_size": 65536 00:19:11.808 } 00:19:11.808 ] 00:19:11.808 }' 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.808 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.067 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.067 14:18:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.067 14:18:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:12.067 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:12.327 14:18:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.327 [2024-11-27 14:18:43.043552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.327 14:18:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.327 "name": "Existed_Raid", 00:19:12.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.327 "strip_size_kb": 64, 00:19:12.327 "state": "configuring", 00:19:12.327 "raid_level": "raid5f", 00:19:12.327 "superblock": false, 00:19:12.327 "num_base_bdevs": 3, 00:19:12.327 "num_base_bdevs_discovered": 1, 00:19:12.327 "num_base_bdevs_operational": 3, 00:19:12.327 "base_bdevs_list": [ 00:19:12.327 { 00:19:12.327 "name": null, 00:19:12.327 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:12.327 "is_configured": false, 00:19:12.327 "data_offset": 0, 00:19:12.327 "data_size": 65536 00:19:12.327 }, 00:19:12.327 { 00:19:12.327 "name": null, 00:19:12.327 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:12.327 "is_configured": false, 00:19:12.327 "data_offset": 0, 00:19:12.327 "data_size": 65536 00:19:12.327 }, 00:19:12.327 { 00:19:12.327 "name": "BaseBdev3", 00:19:12.327 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:12.327 "is_configured": true, 00:19:12.327 "data_offset": 0, 00:19:12.327 "data_size": 65536 00:19:12.327 } 00:19:12.327 ] 00:19:12.327 }' 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.327 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.895 14:18:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.895 [2024-11-27 14:18:43.659748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.895 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.896 "name": "Existed_Raid", 00:19:12.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.896 "strip_size_kb": 64, 00:19:12.896 "state": "configuring", 00:19:12.896 "raid_level": "raid5f", 00:19:12.896 "superblock": false, 00:19:12.896 "num_base_bdevs": 3, 00:19:12.896 "num_base_bdevs_discovered": 2, 00:19:12.896 "num_base_bdevs_operational": 3, 00:19:12.896 "base_bdevs_list": [ 00:19:12.896 { 00:19:12.896 "name": null, 00:19:12.896 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:12.896 "is_configured": false, 00:19:12.896 "data_offset": 0, 00:19:12.896 "data_size": 65536 00:19:12.896 }, 00:19:12.896 { 00:19:12.896 "name": "BaseBdev2", 00:19:12.896 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:12.896 "is_configured": true, 00:19:12.896 "data_offset": 0, 00:19:12.896 "data_size": 65536 00:19:12.896 }, 00:19:12.896 { 00:19:12.896 "name": "BaseBdev3", 00:19:12.896 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:12.896 "is_configured": true, 00:19:12.896 "data_offset": 0, 00:19:12.896 "data_size": 65536 00:19:12.896 } 00:19:12.896 ] 
00:19:12.896 }' 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.896 14:18:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.156 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.156 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.156 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.156 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 819b6955-3086-4bf6-9d3a-2d7ae5c63e76 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.415 [2024-11-27 14:18:44.236884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is 
claimed 00:19:13.415 [2024-11-27 14:18:44.236936] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:13.415 [2024-11-27 14:18:44.236946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:13.415 [2024-11-27 14:18:44.237244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:13.415 [2024-11-27 14:18:44.242830] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:13.415 [2024-11-27 14:18:44.242853] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:13.415 [2024-11-27 14:18:44.243144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.415 NewBaseBdev 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:13.415 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.416 14:18:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.416 [ 00:19:13.416 { 00:19:13.416 "name": "NewBaseBdev", 00:19:13.416 "aliases": [ 00:19:13.416 "819b6955-3086-4bf6-9d3a-2d7ae5c63e76" 00:19:13.416 ], 00:19:13.416 "product_name": "Malloc disk", 00:19:13.416 "block_size": 512, 00:19:13.416 "num_blocks": 65536, 00:19:13.416 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:13.416 "assigned_rate_limits": { 00:19:13.416 "rw_ios_per_sec": 0, 00:19:13.416 "rw_mbytes_per_sec": 0, 00:19:13.416 "r_mbytes_per_sec": 0, 00:19:13.416 "w_mbytes_per_sec": 0 00:19:13.416 }, 00:19:13.416 "claimed": true, 00:19:13.416 "claim_type": "exclusive_write", 00:19:13.416 "zoned": false, 00:19:13.416 "supported_io_types": { 00:19:13.416 "read": true, 00:19:13.416 "write": true, 00:19:13.416 "unmap": true, 00:19:13.416 "flush": true, 00:19:13.416 "reset": true, 00:19:13.416 "nvme_admin": false, 00:19:13.416 "nvme_io": false, 00:19:13.416 "nvme_io_md": false, 00:19:13.416 "write_zeroes": true, 00:19:13.416 "zcopy": true, 00:19:13.416 "get_zone_info": false, 00:19:13.416 "zone_management": false, 00:19:13.416 "zone_append": false, 00:19:13.416 "compare": false, 00:19:13.416 "compare_and_write": false, 00:19:13.416 "abort": true, 00:19:13.416 "seek_hole": false, 00:19:13.416 "seek_data": false, 00:19:13.416 "copy": true, 00:19:13.416 "nvme_iov_md": false 00:19:13.416 }, 00:19:13.416 "memory_domains": [ 00:19:13.416 { 00:19:13.416 "dma_device_id": "system", 00:19:13.416 "dma_device_type": 1 00:19:13.416 }, 00:19:13.416 { 00:19:13.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.416 
"dma_device_type": 2 00:19:13.416 } 00:19:13.416 ], 00:19:13.416 "driver_specific": {} 00:19:13.416 } 00:19:13.416 ] 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.416 14:18:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.416 "name": "Existed_Raid", 00:19:13.416 "uuid": "1fed260a-efbd-4566-ba6d-8da4a9a7b349", 00:19:13.416 "strip_size_kb": 64, 00:19:13.416 "state": "online", 00:19:13.416 "raid_level": "raid5f", 00:19:13.416 "superblock": false, 00:19:13.416 "num_base_bdevs": 3, 00:19:13.416 "num_base_bdevs_discovered": 3, 00:19:13.416 "num_base_bdevs_operational": 3, 00:19:13.416 "base_bdevs_list": [ 00:19:13.416 { 00:19:13.416 "name": "NewBaseBdev", 00:19:13.416 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:13.416 "is_configured": true, 00:19:13.416 "data_offset": 0, 00:19:13.416 "data_size": 65536 00:19:13.416 }, 00:19:13.416 { 00:19:13.416 "name": "BaseBdev2", 00:19:13.416 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:13.416 "is_configured": true, 00:19:13.416 "data_offset": 0, 00:19:13.416 "data_size": 65536 00:19:13.416 }, 00:19:13.416 { 00:19:13.416 "name": "BaseBdev3", 00:19:13.416 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:13.416 "is_configured": true, 00:19:13.416 "data_offset": 0, 00:19:13.416 "data_size": 65536 00:19:13.416 } 00:19:13.416 ] 00:19:13.416 }' 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.416 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.983 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:13.984 14:18:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:13.984 [2024-11-27 14:18:44.733399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.984 "name": "Existed_Raid", 00:19:13.984 "aliases": [ 00:19:13.984 "1fed260a-efbd-4566-ba6d-8da4a9a7b349" 00:19:13.984 ], 00:19:13.984 "product_name": "Raid Volume", 00:19:13.984 "block_size": 512, 00:19:13.984 "num_blocks": 131072, 00:19:13.984 "uuid": "1fed260a-efbd-4566-ba6d-8da4a9a7b349", 00:19:13.984 "assigned_rate_limits": { 00:19:13.984 "rw_ios_per_sec": 0, 00:19:13.984 "rw_mbytes_per_sec": 0, 00:19:13.984 "r_mbytes_per_sec": 0, 00:19:13.984 "w_mbytes_per_sec": 0 00:19:13.984 }, 00:19:13.984 "claimed": false, 00:19:13.984 "zoned": false, 00:19:13.984 "supported_io_types": { 00:19:13.984 "read": true, 00:19:13.984 "write": true, 00:19:13.984 "unmap": false, 00:19:13.984 "flush": false, 00:19:13.984 "reset": true, 00:19:13.984 "nvme_admin": false, 00:19:13.984 "nvme_io": false, 00:19:13.984 "nvme_io_md": false, 00:19:13.984 "write_zeroes": true, 00:19:13.984 "zcopy": false, 00:19:13.984 "get_zone_info": false, 00:19:13.984 "zone_management": false, 00:19:13.984 "zone_append": false, 
00:19:13.984 "compare": false, 00:19:13.984 "compare_and_write": false, 00:19:13.984 "abort": false, 00:19:13.984 "seek_hole": false, 00:19:13.984 "seek_data": false, 00:19:13.984 "copy": false, 00:19:13.984 "nvme_iov_md": false 00:19:13.984 }, 00:19:13.984 "driver_specific": { 00:19:13.984 "raid": { 00:19:13.984 "uuid": "1fed260a-efbd-4566-ba6d-8da4a9a7b349", 00:19:13.984 "strip_size_kb": 64, 00:19:13.984 "state": "online", 00:19:13.984 "raid_level": "raid5f", 00:19:13.984 "superblock": false, 00:19:13.984 "num_base_bdevs": 3, 00:19:13.984 "num_base_bdevs_discovered": 3, 00:19:13.984 "num_base_bdevs_operational": 3, 00:19:13.984 "base_bdevs_list": [ 00:19:13.984 { 00:19:13.984 "name": "NewBaseBdev", 00:19:13.984 "uuid": "819b6955-3086-4bf6-9d3a-2d7ae5c63e76", 00:19:13.984 "is_configured": true, 00:19:13.984 "data_offset": 0, 00:19:13.984 "data_size": 65536 00:19:13.984 }, 00:19:13.984 { 00:19:13.984 "name": "BaseBdev2", 00:19:13.984 "uuid": "c832d1ee-e085-45d3-a26b-d7019f3c916c", 00:19:13.984 "is_configured": true, 00:19:13.984 "data_offset": 0, 00:19:13.984 "data_size": 65536 00:19:13.984 }, 00:19:13.984 { 00:19:13.984 "name": "BaseBdev3", 00:19:13.984 "uuid": "849e67dd-0f87-46a1-b919-3f658646cc78", 00:19:13.984 "is_configured": true, 00:19:13.984 "data_offset": 0, 00:19:13.984 "data_size": 65536 00:19:13.984 } 00:19:13.984 ] 00:19:13.984 } 00:19:13.984 } 00:19:13.984 }' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:13.984 BaseBdev2 00:19:13.984 BaseBdev3' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.984 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.243 [2024-11-27 14:18:44.996695] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:14.243 [2024-11-27 14:18:44.996773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.243 [2024-11-27 14:18:44.996901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.243 [2024-11-27 14:18:44.997270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.243 [2024-11-27 14:18:44.997336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:14.243 14:18:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80112 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80112 ']' 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80112 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80112 00:19:14.243 killing process with pid 80112 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80112' 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80112 00:19:14.243 14:18:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80112 00:19:14.243 [2024-11-27 14:18:45.035962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.502 [2024-11-27 14:18:45.376459] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.880 ************************************ 00:19:15.880 END TEST raid5f_state_function_test 00:19:15.880 ************************************ 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:15.880 00:19:15.880 real 0m11.075s 00:19:15.880 user 0m17.442s 00:19:15.880 sys 0m1.889s 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:15.880 14:18:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:15.880 14:18:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:15.880 14:18:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.880 14:18:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.880 ************************************ 00:19:15.880 START TEST raid5f_state_function_test_sb 00:19:15.880 ************************************ 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:15.880 14:18:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80739 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80739' 00:19:15.880 Process raid pid: 80739 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80739 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80739 ']' 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.880 14:18:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.880 [2024-11-27 14:18:46.796247] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:15.880 [2024-11-27 14:18:46.796379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.139 [2024-11-27 14:18:46.949679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.139 [2024-11-27 14:18:47.077635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.397 [2024-11-27 14:18:47.285819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.397 [2024-11-27 14:18:47.285865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.964 [2024-11-27 14:18:47.650639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.964 [2024-11-27 14:18:47.650760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.964 [2024-11-27 14:18:47.650806] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:16.964 [2024-11-27 14:18:47.650836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:16.964 [2024-11-27 14:18:47.650859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:16.964 [2024-11-27 14:18:47.650885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.964 14:18:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.964 "name": "Existed_Raid", 00:19:16.964 "uuid": "b1a7828f-f5fc-45c8-8155-66957d487977", 00:19:16.964 "strip_size_kb": 64, 00:19:16.964 "state": "configuring", 00:19:16.964 "raid_level": "raid5f", 00:19:16.964 "superblock": true, 00:19:16.964 "num_base_bdevs": 3, 00:19:16.964 "num_base_bdevs_discovered": 0, 00:19:16.964 "num_base_bdevs_operational": 3, 00:19:16.964 "base_bdevs_list": [ 00:19:16.964 { 00:19:16.964 "name": "BaseBdev1", 00:19:16.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.964 "is_configured": false, 00:19:16.964 "data_offset": 0, 00:19:16.964 "data_size": 0 00:19:16.964 }, 00:19:16.964 { 00:19:16.964 "name": "BaseBdev2", 00:19:16.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.964 "is_configured": false, 00:19:16.964 "data_offset": 0, 00:19:16.964 "data_size": 0 00:19:16.964 }, 00:19:16.964 { 00:19:16.964 "name": "BaseBdev3", 00:19:16.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.964 "is_configured": false, 00:19:16.964 "data_offset": 0, 00:19:16.964 "data_size": 0 00:19:16.964 } 00:19:16.964 ] 00:19:16.964 }' 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.964 14:18:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 [2024-11-27 14:18:48.045882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.224 
[2024-11-27 14:18:48.045992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 [2024-11-27 14:18:48.057877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:17.224 [2024-11-27 14:18:48.057930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:17.224 [2024-11-27 14:18:48.057940] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.224 [2024-11-27 14:18:48.057951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.224 [2024-11-27 14:18:48.057959] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:17.224 [2024-11-27 14:18:48.057968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 [2024-11-27 14:18:48.104574] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.224 BaseBdev1 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 [ 00:19:17.224 { 00:19:17.224 "name": "BaseBdev1", 00:19:17.224 "aliases": [ 00:19:17.224 "3b08b82e-bebd-4e1c-b689-d5313349c9ba" 00:19:17.224 ], 00:19:17.224 "product_name": "Malloc disk", 00:19:17.224 "block_size": 512, 00:19:17.224 
"num_blocks": 65536, 00:19:17.224 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:17.224 "assigned_rate_limits": { 00:19:17.224 "rw_ios_per_sec": 0, 00:19:17.224 "rw_mbytes_per_sec": 0, 00:19:17.224 "r_mbytes_per_sec": 0, 00:19:17.224 "w_mbytes_per_sec": 0 00:19:17.224 }, 00:19:17.224 "claimed": true, 00:19:17.224 "claim_type": "exclusive_write", 00:19:17.224 "zoned": false, 00:19:17.224 "supported_io_types": { 00:19:17.224 "read": true, 00:19:17.224 "write": true, 00:19:17.224 "unmap": true, 00:19:17.224 "flush": true, 00:19:17.224 "reset": true, 00:19:17.224 "nvme_admin": false, 00:19:17.224 "nvme_io": false, 00:19:17.224 "nvme_io_md": false, 00:19:17.224 "write_zeroes": true, 00:19:17.224 "zcopy": true, 00:19:17.224 "get_zone_info": false, 00:19:17.224 "zone_management": false, 00:19:17.224 "zone_append": false, 00:19:17.224 "compare": false, 00:19:17.224 "compare_and_write": false, 00:19:17.224 "abort": true, 00:19:17.224 "seek_hole": false, 00:19:17.224 "seek_data": false, 00:19:17.224 "copy": true, 00:19:17.224 "nvme_iov_md": false 00:19:17.224 }, 00:19:17.224 "memory_domains": [ 00:19:17.224 { 00:19:17.224 "dma_device_id": "system", 00:19:17.224 "dma_device_type": 1 00:19:17.224 }, 00:19:17.224 { 00:19:17.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.224 "dma_device_type": 2 00:19:17.224 } 00:19:17.224 ], 00:19:17.224 "driver_specific": {} 00:19:17.224 } 00:19:17.224 ] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.224 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.484 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.484 "name": "Existed_Raid", 00:19:17.484 "uuid": "56599ee1-6184-4f67-a7ae-86fb9bb1e3c7", 00:19:17.484 "strip_size_kb": 64, 00:19:17.484 "state": "configuring", 00:19:17.484 "raid_level": "raid5f", 00:19:17.484 "superblock": true, 00:19:17.484 "num_base_bdevs": 3, 00:19:17.484 "num_base_bdevs_discovered": 1, 00:19:17.484 "num_base_bdevs_operational": 3, 00:19:17.484 "base_bdevs_list": [ 00:19:17.484 { 00:19:17.484 
"name": "BaseBdev1", 00:19:17.484 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:17.484 "is_configured": true, 00:19:17.484 "data_offset": 2048, 00:19:17.484 "data_size": 63488 00:19:17.484 }, 00:19:17.484 { 00:19:17.484 "name": "BaseBdev2", 00:19:17.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.484 "is_configured": false, 00:19:17.484 "data_offset": 0, 00:19:17.484 "data_size": 0 00:19:17.484 }, 00:19:17.484 { 00:19:17.484 "name": "BaseBdev3", 00:19:17.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.484 "is_configured": false, 00:19:17.484 "data_offset": 0, 00:19:17.484 "data_size": 0 00:19:17.484 } 00:19:17.484 ] 00:19:17.484 }' 00:19:17.484 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.484 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:17.743 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.743 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.743 [2024-11-27 14:18:48.588037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:17.744 [2024-11-27 14:18:48.588111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:17.744 [2024-11-27 14:18:48.600105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.744 [2024-11-27 14:18:48.602300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:17.744 [2024-11-27 14:18:48.602416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:17.744 [2024-11-27 14:18:48.602470] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:17.744 [2024-11-27 14:18:48.602513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.744 "name": "Existed_Raid", 00:19:17.744 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:17.744 "strip_size_kb": 64, 00:19:17.744 "state": "configuring", 00:19:17.744 "raid_level": "raid5f", 00:19:17.744 "superblock": true, 00:19:17.744 "num_base_bdevs": 3, 00:19:17.744 "num_base_bdevs_discovered": 1, 00:19:17.744 "num_base_bdevs_operational": 3, 00:19:17.744 "base_bdevs_list": [ 00:19:17.744 { 00:19:17.744 "name": "BaseBdev1", 00:19:17.744 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:17.744 "is_configured": true, 00:19:17.744 "data_offset": 2048, 00:19:17.744 "data_size": 63488 00:19:17.744 }, 00:19:17.744 { 00:19:17.744 "name": "BaseBdev2", 00:19:17.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.744 "is_configured": false, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 0 00:19:17.744 }, 00:19:17.744 { 00:19:17.744 "name": "BaseBdev3", 00:19:17.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.744 "is_configured": false, 00:19:17.744 "data_offset": 0, 00:19:17.744 "data_size": 
0 00:19:17.744 } 00:19:17.744 ] 00:19:17.744 }' 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.744 14:18:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 [2024-11-27 14:18:49.073758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.312 BaseBdev2 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 [ 00:19:18.312 { 00:19:18.312 "name": "BaseBdev2", 00:19:18.312 "aliases": [ 00:19:18.312 "697c0d2f-99b9-4442-bf2a-047f66665d30" 00:19:18.312 ], 00:19:18.312 "product_name": "Malloc disk", 00:19:18.312 "block_size": 512, 00:19:18.312 "num_blocks": 65536, 00:19:18.312 "uuid": "697c0d2f-99b9-4442-bf2a-047f66665d30", 00:19:18.312 "assigned_rate_limits": { 00:19:18.312 "rw_ios_per_sec": 0, 00:19:18.312 "rw_mbytes_per_sec": 0, 00:19:18.312 "r_mbytes_per_sec": 0, 00:19:18.312 "w_mbytes_per_sec": 0 00:19:18.312 }, 00:19:18.312 "claimed": true, 00:19:18.312 "claim_type": "exclusive_write", 00:19:18.312 "zoned": false, 00:19:18.312 "supported_io_types": { 00:19:18.312 "read": true, 00:19:18.312 "write": true, 00:19:18.312 "unmap": true, 00:19:18.312 "flush": true, 00:19:18.312 "reset": true, 00:19:18.312 "nvme_admin": false, 00:19:18.312 "nvme_io": false, 00:19:18.312 "nvme_io_md": false, 00:19:18.312 "write_zeroes": true, 00:19:18.312 "zcopy": true, 00:19:18.312 "get_zone_info": false, 00:19:18.312 "zone_management": false, 00:19:18.312 "zone_append": false, 00:19:18.312 "compare": false, 00:19:18.312 "compare_and_write": false, 00:19:18.312 "abort": true, 00:19:18.312 "seek_hole": false, 00:19:18.312 "seek_data": false, 00:19:18.312 "copy": true, 00:19:18.312 "nvme_iov_md": false 00:19:18.312 }, 00:19:18.312 "memory_domains": [ 00:19:18.312 { 00:19:18.312 "dma_device_id": "system", 00:19:18.312 "dma_device_type": 1 00:19:18.312 }, 00:19:18.312 { 00:19:18.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.312 "dma_device_type": 2 00:19:18.312 } 
00:19:18.312 ], 00:19:18.312 "driver_specific": {} 00:19:18.312 } 00:19:18.312 ] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.312 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.312 "name": "Existed_Raid", 00:19:18.312 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:18.312 "strip_size_kb": 64, 00:19:18.312 "state": "configuring", 00:19:18.312 "raid_level": "raid5f", 00:19:18.312 "superblock": true, 00:19:18.312 "num_base_bdevs": 3, 00:19:18.312 "num_base_bdevs_discovered": 2, 00:19:18.312 "num_base_bdevs_operational": 3, 00:19:18.312 "base_bdevs_list": [ 00:19:18.312 { 00:19:18.312 "name": "BaseBdev1", 00:19:18.312 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:18.312 "is_configured": true, 00:19:18.312 "data_offset": 2048, 00:19:18.312 "data_size": 63488 00:19:18.312 }, 00:19:18.312 { 00:19:18.312 "name": "BaseBdev2", 00:19:18.312 "uuid": "697c0d2f-99b9-4442-bf2a-047f66665d30", 00:19:18.312 "is_configured": true, 00:19:18.312 "data_offset": 2048, 00:19:18.312 "data_size": 63488 00:19:18.312 }, 00:19:18.312 { 00:19:18.312 "name": "BaseBdev3", 00:19:18.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.312 "is_configured": false, 00:19:18.312 "data_offset": 0, 00:19:18.312 "data_size": 0 00:19:18.312 } 00:19:18.312 ] 00:19:18.312 }' 00:19:18.313 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.313 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.881 [2024-11-27 14:18:49.584214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.881 [2024-11-27 14:18:49.584517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:18.881 [2024-11-27 14:18:49.584541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:18.881 [2024-11-27 14:18:49.584844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.881 BaseBdev3 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.881 [2024-11-27 14:18:49.591051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:18.881 [2024-11-27 14:18:49.591126] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:18.881 [2024-11-27 14:18:49.591371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.881 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.881 [ 00:19:18.881 { 00:19:18.881 "name": "BaseBdev3", 00:19:18.881 "aliases": [ 00:19:18.881 "48e60bd9-22dd-4533-9854-bd64fe6a33cb" 00:19:18.881 ], 00:19:18.881 "product_name": "Malloc disk", 00:19:18.881 "block_size": 512, 00:19:18.881 "num_blocks": 65536, 00:19:18.881 "uuid": "48e60bd9-22dd-4533-9854-bd64fe6a33cb", 00:19:18.881 "assigned_rate_limits": { 00:19:18.881 "rw_ios_per_sec": 0, 00:19:18.881 "rw_mbytes_per_sec": 0, 00:19:18.881 "r_mbytes_per_sec": 0, 00:19:18.881 "w_mbytes_per_sec": 0 00:19:18.881 }, 00:19:18.881 "claimed": true, 00:19:18.881 "claim_type": "exclusive_write", 00:19:18.881 "zoned": false, 00:19:18.881 "supported_io_types": { 00:19:18.881 "read": true, 00:19:18.881 "write": true, 00:19:18.881 "unmap": true, 00:19:18.881 "flush": true, 00:19:18.881 "reset": true, 00:19:18.881 "nvme_admin": false, 00:19:18.881 "nvme_io": false, 00:19:18.881 "nvme_io_md": false, 00:19:18.881 "write_zeroes": true, 00:19:18.881 "zcopy": true, 00:19:18.881 "get_zone_info": false, 00:19:18.881 "zone_management": false, 00:19:18.881 "zone_append": false, 00:19:18.881 "compare": false, 00:19:18.881 "compare_and_write": false, 00:19:18.881 "abort": true, 00:19:18.881 "seek_hole": false, 00:19:18.881 "seek_data": false, 00:19:18.881 "copy": true, 00:19:18.881 
"nvme_iov_md": false 00:19:18.881 }, 00:19:18.881 "memory_domains": [ 00:19:18.881 { 00:19:18.881 "dma_device_id": "system", 00:19:18.881 "dma_device_type": 1 00:19:18.881 }, 00:19:18.881 { 00:19:18.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.881 "dma_device_type": 2 00:19:18.881 } 00:19:18.881 ], 00:19:18.882 "driver_specific": {} 00:19:18.882 } 00:19:18.882 ] 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.882 "name": "Existed_Raid", 00:19:18.882 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:18.882 "strip_size_kb": 64, 00:19:18.882 "state": "online", 00:19:18.882 "raid_level": "raid5f", 00:19:18.882 "superblock": true, 00:19:18.882 "num_base_bdevs": 3, 00:19:18.882 "num_base_bdevs_discovered": 3, 00:19:18.882 "num_base_bdevs_operational": 3, 00:19:18.882 "base_bdevs_list": [ 00:19:18.882 { 00:19:18.882 "name": "BaseBdev1", 00:19:18.882 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:18.882 "is_configured": true, 00:19:18.882 "data_offset": 2048, 00:19:18.882 "data_size": 63488 00:19:18.882 }, 00:19:18.882 { 00:19:18.882 "name": "BaseBdev2", 00:19:18.882 "uuid": "697c0d2f-99b9-4442-bf2a-047f66665d30", 00:19:18.882 "is_configured": true, 00:19:18.882 "data_offset": 2048, 00:19:18.882 "data_size": 63488 00:19:18.882 }, 00:19:18.882 { 00:19:18.882 "name": "BaseBdev3", 00:19:18.882 "uuid": "48e60bd9-22dd-4533-9854-bd64fe6a33cb", 00:19:18.882 "is_configured": true, 00:19:18.882 "data_offset": 2048, 00:19:18.882 "data_size": 63488 00:19:18.882 } 00:19:18.882 ] 00:19:18.882 }' 00:19:18.882 14:18:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.882 14:18:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.141 [2024-11-27 14:18:50.065456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.141 "name": "Existed_Raid", 00:19:19.141 "aliases": [ 00:19:19.141 "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31" 00:19:19.141 ], 00:19:19.141 "product_name": "Raid Volume", 00:19:19.141 "block_size": 512, 00:19:19.141 "num_blocks": 126976, 00:19:19.141 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:19.141 "assigned_rate_limits": { 00:19:19.141 "rw_ios_per_sec": 0, 00:19:19.141 
"rw_mbytes_per_sec": 0, 00:19:19.141 "r_mbytes_per_sec": 0, 00:19:19.141 "w_mbytes_per_sec": 0 00:19:19.141 }, 00:19:19.141 "claimed": false, 00:19:19.141 "zoned": false, 00:19:19.141 "supported_io_types": { 00:19:19.141 "read": true, 00:19:19.141 "write": true, 00:19:19.141 "unmap": false, 00:19:19.141 "flush": false, 00:19:19.141 "reset": true, 00:19:19.141 "nvme_admin": false, 00:19:19.141 "nvme_io": false, 00:19:19.141 "nvme_io_md": false, 00:19:19.141 "write_zeroes": true, 00:19:19.141 "zcopy": false, 00:19:19.141 "get_zone_info": false, 00:19:19.141 "zone_management": false, 00:19:19.141 "zone_append": false, 00:19:19.141 "compare": false, 00:19:19.141 "compare_and_write": false, 00:19:19.141 "abort": false, 00:19:19.141 "seek_hole": false, 00:19:19.141 "seek_data": false, 00:19:19.141 "copy": false, 00:19:19.141 "nvme_iov_md": false 00:19:19.141 }, 00:19:19.141 "driver_specific": { 00:19:19.141 "raid": { 00:19:19.141 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:19.141 "strip_size_kb": 64, 00:19:19.141 "state": "online", 00:19:19.141 "raid_level": "raid5f", 00:19:19.141 "superblock": true, 00:19:19.141 "num_base_bdevs": 3, 00:19:19.141 "num_base_bdevs_discovered": 3, 00:19:19.141 "num_base_bdevs_operational": 3, 00:19:19.141 "base_bdevs_list": [ 00:19:19.141 { 00:19:19.141 "name": "BaseBdev1", 00:19:19.141 "uuid": "3b08b82e-bebd-4e1c-b689-d5313349c9ba", 00:19:19.141 "is_configured": true, 00:19:19.141 "data_offset": 2048, 00:19:19.141 "data_size": 63488 00:19:19.141 }, 00:19:19.141 { 00:19:19.141 "name": "BaseBdev2", 00:19:19.141 "uuid": "697c0d2f-99b9-4442-bf2a-047f66665d30", 00:19:19.141 "is_configured": true, 00:19:19.141 "data_offset": 2048, 00:19:19.141 "data_size": 63488 00:19:19.141 }, 00:19:19.141 { 00:19:19.141 "name": "BaseBdev3", 00:19:19.141 "uuid": "48e60bd9-22dd-4533-9854-bd64fe6a33cb", 00:19:19.141 "is_configured": true, 00:19:19.141 "data_offset": 2048, 00:19:19.141 "data_size": 63488 00:19:19.141 } 00:19:19.141 ] 00:19:19.141 } 
00:19:19.141 } 00:19:19.141 }' 00:19:19.141 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:19.401 BaseBdev2 00:19:19.401 BaseBdev3' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.401 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.402 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:19.402 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.402 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.402 [2024-11-27 
14:18:50.288882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.660 14:18:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.660 "name": "Existed_Raid", 00:19:19.660 "uuid": "c1af3008-e1e3-4691-aa7d-dbf3e7f17b31", 00:19:19.660 "strip_size_kb": 64, 00:19:19.660 "state": "online", 00:19:19.660 "raid_level": "raid5f", 00:19:19.660 "superblock": true, 00:19:19.660 "num_base_bdevs": 3, 00:19:19.660 "num_base_bdevs_discovered": 2, 00:19:19.660 "num_base_bdevs_operational": 2, 00:19:19.660 "base_bdevs_list": [ 00:19:19.660 { 00:19:19.660 "name": null, 00:19:19.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.660 "is_configured": false, 00:19:19.660 "data_offset": 0, 00:19:19.660 "data_size": 63488 00:19:19.660 }, 00:19:19.660 { 00:19:19.660 "name": "BaseBdev2", 00:19:19.660 "uuid": "697c0d2f-99b9-4442-bf2a-047f66665d30", 00:19:19.660 "is_configured": true, 00:19:19.660 "data_offset": 2048, 00:19:19.660 "data_size": 63488 00:19:19.660 }, 00:19:19.660 { 00:19:19.660 "name": "BaseBdev3", 00:19:19.660 "uuid": "48e60bd9-22dd-4533-9854-bd64fe6a33cb", 00:19:19.660 "is_configured": true, 00:19:19.660 "data_offset": 2048, 00:19:19.660 "data_size": 63488 00:19:19.660 } 00:19:19.660 ] 00:19:19.660 }' 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.660 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.919 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.919 [2024-11-27 14:18:50.872285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:19.919 [2024-11-27 14:18:50.872529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.179 [2024-11-27 14:18:50.982350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:20.179 14:18:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.179 14:18:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.179 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.179 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:20.180 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:20.180 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:20.180 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.180 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.180 [2024-11-27 14:18:51.042346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:20.180 [2024-11-27 14:18:51.042407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:20.438 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.438 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.439 
14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 BaseBdev2 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:20.439 14:18:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 [ 00:19:20.439 { 00:19:20.439 "name": "BaseBdev2", 00:19:20.439 "aliases": [ 00:19:20.439 "841d64f5-2850-4b9a-87f8-eef2428f6805" 00:19:20.439 ], 00:19:20.439 "product_name": "Malloc disk", 00:19:20.439 "block_size": 512, 00:19:20.439 "num_blocks": 65536, 00:19:20.439 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:20.439 "assigned_rate_limits": { 00:19:20.439 "rw_ios_per_sec": 0, 00:19:20.439 "rw_mbytes_per_sec": 0, 00:19:20.439 "r_mbytes_per_sec": 0, 00:19:20.439 "w_mbytes_per_sec": 0 00:19:20.439 }, 00:19:20.439 "claimed": false, 00:19:20.439 "zoned": false, 00:19:20.439 "supported_io_types": { 00:19:20.439 "read": true, 00:19:20.439 "write": true, 00:19:20.439 "unmap": true, 00:19:20.439 "flush": true, 00:19:20.439 "reset": true, 00:19:20.439 "nvme_admin": false, 00:19:20.439 "nvme_io": false, 00:19:20.439 "nvme_io_md": false, 00:19:20.439 "write_zeroes": true, 00:19:20.439 "zcopy": true, 00:19:20.439 "get_zone_info": false, 
00:19:20.439 "zone_management": false, 00:19:20.439 "zone_append": false, 00:19:20.439 "compare": false, 00:19:20.439 "compare_and_write": false, 00:19:20.439 "abort": true, 00:19:20.439 "seek_hole": false, 00:19:20.439 "seek_data": false, 00:19:20.439 "copy": true, 00:19:20.439 "nvme_iov_md": false 00:19:20.439 }, 00:19:20.439 "memory_domains": [ 00:19:20.439 { 00:19:20.439 "dma_device_id": "system", 00:19:20.439 "dma_device_type": 1 00:19:20.439 }, 00:19:20.439 { 00:19:20.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.439 "dma_device_type": 2 00:19:20.439 } 00:19:20.439 ], 00:19:20.439 "driver_specific": {} 00:19:20.439 } 00:19:20.439 ] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 BaseBdev3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:20.439 14:18:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 [ 00:19:20.439 { 00:19:20.439 "name": "BaseBdev3", 00:19:20.439 "aliases": [ 00:19:20.439 "bbb6c156-09d0-47a3-b84d-35da1e5296a6" 00:19:20.439 ], 00:19:20.439 "product_name": "Malloc disk", 00:19:20.439 "block_size": 512, 00:19:20.439 "num_blocks": 65536, 00:19:20.439 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:20.439 "assigned_rate_limits": { 00:19:20.439 "rw_ios_per_sec": 0, 00:19:20.439 "rw_mbytes_per_sec": 0, 00:19:20.439 "r_mbytes_per_sec": 0, 00:19:20.439 "w_mbytes_per_sec": 0 00:19:20.439 }, 00:19:20.439 "claimed": false, 00:19:20.439 "zoned": false, 00:19:20.439 "supported_io_types": { 00:19:20.439 "read": true, 00:19:20.439 "write": true, 00:19:20.439 "unmap": true, 00:19:20.439 "flush": true, 00:19:20.439 "reset": true, 00:19:20.439 "nvme_admin": false, 00:19:20.439 "nvme_io": false, 00:19:20.439 "nvme_io_md": 
false, 00:19:20.439 "write_zeroes": true, 00:19:20.439 "zcopy": true, 00:19:20.439 "get_zone_info": false, 00:19:20.439 "zone_management": false, 00:19:20.439 "zone_append": false, 00:19:20.439 "compare": false, 00:19:20.439 "compare_and_write": false, 00:19:20.439 "abort": true, 00:19:20.439 "seek_hole": false, 00:19:20.439 "seek_data": false, 00:19:20.439 "copy": true, 00:19:20.439 "nvme_iov_md": false 00:19:20.439 }, 00:19:20.439 "memory_domains": [ 00:19:20.439 { 00:19:20.439 "dma_device_id": "system", 00:19:20.439 "dma_device_type": 1 00:19:20.439 }, 00:19:20.439 { 00:19:20.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.439 "dma_device_type": 2 00:19:20.439 } 00:19:20.439 ], 00:19:20.439 "driver_specific": {} 00:19:20.439 } 00:19:20.439 ] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.439 [2024-11-27 14:18:51.364919] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.439 [2024-11-27 14:18:51.364973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.439 [2024-11-27 14:18:51.365001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:19:20.439 [2024-11-27 14:18:51.367156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.439 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.440 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.699 14:18:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.699 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.699 "name": "Existed_Raid", 00:19:20.699 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:20.699 "strip_size_kb": 64, 00:19:20.699 "state": "configuring", 00:19:20.699 "raid_level": "raid5f", 00:19:20.699 "superblock": true, 00:19:20.699 "num_base_bdevs": 3, 00:19:20.699 "num_base_bdevs_discovered": 2, 00:19:20.699 "num_base_bdevs_operational": 3, 00:19:20.699 "base_bdevs_list": [ 00:19:20.699 { 00:19:20.699 "name": "BaseBdev1", 00:19:20.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.699 "is_configured": false, 00:19:20.699 "data_offset": 0, 00:19:20.699 "data_size": 0 00:19:20.699 }, 00:19:20.699 { 00:19:20.699 "name": "BaseBdev2", 00:19:20.699 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:20.699 "is_configured": true, 00:19:20.699 "data_offset": 2048, 00:19:20.699 "data_size": 63488 00:19:20.699 }, 00:19:20.699 { 00:19:20.699 "name": "BaseBdev3", 00:19:20.699 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:20.699 "is_configured": true, 00:19:20.699 "data_offset": 2048, 00:19:20.699 "data_size": 63488 00:19:20.699 } 00:19:20.699 ] 00:19:20.699 }' 00:19:20.699 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.699 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 [2024-11-27 14:18:51.792240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:20.957 
14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:20.957 "name": "Existed_Raid", 00:19:20.957 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:20.957 "strip_size_kb": 64, 00:19:20.957 "state": "configuring", 00:19:20.957 "raid_level": "raid5f", 00:19:20.957 "superblock": true, 00:19:20.957 "num_base_bdevs": 3, 00:19:20.957 "num_base_bdevs_discovered": 1, 00:19:20.957 "num_base_bdevs_operational": 3, 00:19:20.957 "base_bdevs_list": [ 00:19:20.957 { 00:19:20.957 "name": "BaseBdev1", 00:19:20.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.957 "is_configured": false, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 0 00:19:20.957 }, 00:19:20.957 { 00:19:20.957 "name": null, 00:19:20.957 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:20.957 "is_configured": false, 00:19:20.957 "data_offset": 0, 00:19:20.957 "data_size": 63488 00:19:20.957 }, 00:19:20.957 { 00:19:20.957 "name": "BaseBdev3", 00:19:20.957 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:20.957 "is_configured": true, 00:19:20.957 "data_offset": 2048, 00:19:20.957 "data_size": 63488 00:19:20.957 } 00:19:20.957 ] 00:19:20.957 }' 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.957 14:18:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.544 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.544 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.545 [2024-11-27 14:18:52.306682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.545 BaseBdev1 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:21.545 
14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.545 [ 00:19:21.545 { 00:19:21.545 "name": "BaseBdev1", 00:19:21.545 "aliases": [ 00:19:21.545 "1282f626-c232-4993-b5c2-625a2f4c66dc" 00:19:21.545 ], 00:19:21.545 "product_name": "Malloc disk", 00:19:21.545 "block_size": 512, 00:19:21.545 "num_blocks": 65536, 00:19:21.545 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:21.545 "assigned_rate_limits": { 00:19:21.545 "rw_ios_per_sec": 0, 00:19:21.545 "rw_mbytes_per_sec": 0, 00:19:21.545 "r_mbytes_per_sec": 0, 00:19:21.545 "w_mbytes_per_sec": 0 00:19:21.545 }, 00:19:21.545 "claimed": true, 00:19:21.545 "claim_type": "exclusive_write", 00:19:21.545 "zoned": false, 00:19:21.545 "supported_io_types": { 00:19:21.545 "read": true, 00:19:21.545 "write": true, 00:19:21.545 "unmap": true, 00:19:21.545 "flush": true, 00:19:21.545 "reset": true, 00:19:21.545 "nvme_admin": false, 00:19:21.545 "nvme_io": false, 00:19:21.545 "nvme_io_md": false, 00:19:21.545 "write_zeroes": true, 00:19:21.545 "zcopy": true, 00:19:21.545 "get_zone_info": false, 00:19:21.545 "zone_management": false, 00:19:21.545 "zone_append": false, 00:19:21.545 "compare": false, 00:19:21.545 "compare_and_write": false, 00:19:21.545 "abort": true, 00:19:21.545 "seek_hole": false, 00:19:21.545 "seek_data": false, 00:19:21.545 "copy": true, 00:19:21.545 "nvme_iov_md": false 00:19:21.545 }, 00:19:21.545 "memory_domains": [ 00:19:21.545 { 00:19:21.545 "dma_device_id": "system", 00:19:21.545 "dma_device_type": 1 00:19:21.545 }, 00:19:21.545 { 00:19:21.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.545 "dma_device_type": 2 00:19:21.545 } 00:19:21.545 ], 00:19:21.545 "driver_specific": {} 00:19:21.545 } 00:19:21.545 ] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.545 
14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:21.545 "name": "Existed_Raid", 00:19:21.545 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:21.545 "strip_size_kb": 64, 00:19:21.545 "state": "configuring", 00:19:21.545 "raid_level": "raid5f", 00:19:21.545 "superblock": true, 00:19:21.545 "num_base_bdevs": 3, 00:19:21.545 "num_base_bdevs_discovered": 2, 00:19:21.545 "num_base_bdevs_operational": 3, 00:19:21.545 "base_bdevs_list": [ 00:19:21.545 { 00:19:21.545 "name": "BaseBdev1", 00:19:21.545 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:21.545 "is_configured": true, 00:19:21.545 "data_offset": 2048, 00:19:21.545 "data_size": 63488 00:19:21.545 }, 00:19:21.545 { 00:19:21.545 "name": null, 00:19:21.545 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:21.545 "is_configured": false, 00:19:21.545 "data_offset": 0, 00:19:21.545 "data_size": 63488 00:19:21.545 }, 00:19:21.545 { 00:19:21.545 "name": "BaseBdev3", 00:19:21.545 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:21.545 "is_configured": true, 00:19:21.545 "data_offset": 2048, 00:19:21.545 "data_size": 63488 00:19:21.545 } 00:19:21.545 ] 00:19:21.545 }' 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.545 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 [2024-11-27 14:18:52.865816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.113 14:18:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.113 "name": "Existed_Raid", 00:19:22.113 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:22.113 "strip_size_kb": 64, 00:19:22.113 "state": "configuring", 00:19:22.113 "raid_level": "raid5f", 00:19:22.113 "superblock": true, 00:19:22.113 "num_base_bdevs": 3, 00:19:22.113 "num_base_bdevs_discovered": 1, 00:19:22.113 "num_base_bdevs_operational": 3, 00:19:22.113 "base_bdevs_list": [ 00:19:22.113 { 00:19:22.113 "name": "BaseBdev1", 00:19:22.113 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:22.113 "is_configured": true, 00:19:22.113 "data_offset": 2048, 00:19:22.113 "data_size": 63488 00:19:22.113 }, 00:19:22.113 { 00:19:22.113 "name": null, 00:19:22.113 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:22.113 "is_configured": false, 00:19:22.113 "data_offset": 0, 00:19:22.113 "data_size": 63488 00:19:22.113 }, 00:19:22.113 { 00:19:22.113 "name": null, 00:19:22.113 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:22.113 "is_configured": false, 00:19:22.113 "data_offset": 0, 00:19:22.113 "data_size": 63488 00:19:22.113 } 00:19:22.113 ] 00:19:22.113 }' 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.113 14:18:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.372 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:22.372 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.372 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.372 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:22.372 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.631 [2024-11-27 14:18:53.341077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.631 14:18:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.631 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.632 "name": "Existed_Raid", 00:19:22.632 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:22.632 "strip_size_kb": 64, 00:19:22.632 "state": "configuring", 00:19:22.632 "raid_level": "raid5f", 00:19:22.632 "superblock": true, 00:19:22.632 "num_base_bdevs": 3, 00:19:22.632 "num_base_bdevs_discovered": 2, 00:19:22.632 "num_base_bdevs_operational": 3, 00:19:22.632 "base_bdevs_list": [ 00:19:22.632 { 00:19:22.632 "name": "BaseBdev1", 00:19:22.632 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:22.632 "is_configured": true, 00:19:22.632 "data_offset": 2048, 00:19:22.632 "data_size": 63488 00:19:22.632 }, 00:19:22.632 { 00:19:22.632 "name": null, 00:19:22.632 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:22.632 "is_configured": false, 00:19:22.632 "data_offset": 0, 00:19:22.632 "data_size": 63488 00:19:22.632 }, 00:19:22.632 { 
00:19:22.632 "name": "BaseBdev3", 00:19:22.632 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:22.632 "is_configured": true, 00:19:22.632 "data_offset": 2048, 00:19:22.632 "data_size": 63488 00:19:22.632 } 00:19:22.632 ] 00:19:22.632 }' 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.632 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.891 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.891 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:22.891 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.891 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.150 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.150 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:23.150 14:18:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:23.150 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.150 14:18:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.150 [2024-11-27 14:18:53.888174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:23.150 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.150 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.151 "name": "Existed_Raid", 00:19:23.151 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:23.151 "strip_size_kb": 64, 00:19:23.151 "state": "configuring", 00:19:23.151 "raid_level": "raid5f", 00:19:23.151 "superblock": true, 00:19:23.151 "num_base_bdevs": 3, 00:19:23.151 "num_base_bdevs_discovered": 1, 00:19:23.151 
"num_base_bdevs_operational": 3, 00:19:23.151 "base_bdevs_list": [ 00:19:23.151 { 00:19:23.151 "name": null, 00:19:23.151 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:23.151 "is_configured": false, 00:19:23.151 "data_offset": 0, 00:19:23.151 "data_size": 63488 00:19:23.151 }, 00:19:23.151 { 00:19:23.151 "name": null, 00:19:23.151 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:23.151 "is_configured": false, 00:19:23.151 "data_offset": 0, 00:19:23.151 "data_size": 63488 00:19:23.151 }, 00:19:23.151 { 00:19:23.151 "name": "BaseBdev3", 00:19:23.151 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:23.151 "is_configured": true, 00:19:23.151 "data_offset": 2048, 00:19:23.151 "data_size": 63488 00:19:23.151 } 00:19:23.151 ] 00:19:23.151 }' 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.151 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.718 14:18:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.718 [2024-11-27 14:18:54.468597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.718 "name": "Existed_Raid", 00:19:23.718 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:23.718 "strip_size_kb": 64, 00:19:23.718 "state": "configuring", 00:19:23.718 "raid_level": "raid5f", 00:19:23.718 "superblock": true, 00:19:23.718 "num_base_bdevs": 3, 00:19:23.718 "num_base_bdevs_discovered": 2, 00:19:23.718 "num_base_bdevs_operational": 3, 00:19:23.718 "base_bdevs_list": [ 00:19:23.718 { 00:19:23.718 "name": null, 00:19:23.718 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:23.718 "is_configured": false, 00:19:23.718 "data_offset": 0, 00:19:23.718 "data_size": 63488 00:19:23.718 }, 00:19:23.718 { 00:19:23.718 "name": "BaseBdev2", 00:19:23.718 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:23.718 "is_configured": true, 00:19:23.718 "data_offset": 2048, 00:19:23.718 "data_size": 63488 00:19:23.718 }, 00:19:23.718 { 00:19:23.718 "name": "BaseBdev3", 00:19:23.718 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:23.718 "is_configured": true, 00:19:23.718 "data_offset": 2048, 00:19:23.718 "data_size": 63488 00:19:23.718 } 00:19:23.718 ] 00:19:23.718 }' 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.718 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.977 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.977 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:23.977 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.977 14:18:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1282f626-c232-4993-b5c2-625a2f4c66dc 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.237 14:18:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 [2024-11-27 14:18:55.036482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:24.237 [2024-11-27 14:18:55.036865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:24.237 [2024-11-27 14:18:55.036931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:24.237 [2024-11-27 14:18:55.037262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:24.237 NewBaseBdev 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 [2024-11-27 14:18:55.043330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:24.237 [2024-11-27 14:18:55.043391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:24.237 [2024-11-27 14:18:55.043644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 [ 00:19:24.237 { 00:19:24.237 "name": "NewBaseBdev", 00:19:24.237 
"aliases": [ 00:19:24.237 "1282f626-c232-4993-b5c2-625a2f4c66dc" 00:19:24.237 ], 00:19:24.237 "product_name": "Malloc disk", 00:19:24.237 "block_size": 512, 00:19:24.237 "num_blocks": 65536, 00:19:24.237 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:24.237 "assigned_rate_limits": { 00:19:24.237 "rw_ios_per_sec": 0, 00:19:24.237 "rw_mbytes_per_sec": 0, 00:19:24.237 "r_mbytes_per_sec": 0, 00:19:24.237 "w_mbytes_per_sec": 0 00:19:24.237 }, 00:19:24.237 "claimed": true, 00:19:24.237 "claim_type": "exclusive_write", 00:19:24.237 "zoned": false, 00:19:24.237 "supported_io_types": { 00:19:24.237 "read": true, 00:19:24.237 "write": true, 00:19:24.237 "unmap": true, 00:19:24.237 "flush": true, 00:19:24.237 "reset": true, 00:19:24.237 "nvme_admin": false, 00:19:24.237 "nvme_io": false, 00:19:24.237 "nvme_io_md": false, 00:19:24.237 "write_zeroes": true, 00:19:24.237 "zcopy": true, 00:19:24.237 "get_zone_info": false, 00:19:24.237 "zone_management": false, 00:19:24.237 "zone_append": false, 00:19:24.237 "compare": false, 00:19:24.237 "compare_and_write": false, 00:19:24.237 "abort": true, 00:19:24.237 "seek_hole": false, 00:19:24.237 "seek_data": false, 00:19:24.237 "copy": true, 00:19:24.237 "nvme_iov_md": false 00:19:24.237 }, 00:19:24.237 "memory_domains": [ 00:19:24.237 { 00:19:24.237 "dma_device_id": "system", 00:19:24.237 "dma_device_type": 1 00:19:24.237 }, 00:19:24.237 { 00:19:24.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.237 "dma_device_type": 2 00:19:24.237 } 00:19:24.237 ], 00:19:24.237 "driver_specific": {} 00:19:24.237 } 00:19:24.237 ] 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:24.237 14:18:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.237 "name": "Existed_Raid", 00:19:24.237 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:24.237 "strip_size_kb": 64, 00:19:24.237 "state": "online", 00:19:24.237 "raid_level": "raid5f", 00:19:24.237 "superblock": true, 00:19:24.237 
"num_base_bdevs": 3, 00:19:24.237 "num_base_bdevs_discovered": 3, 00:19:24.237 "num_base_bdevs_operational": 3, 00:19:24.237 "base_bdevs_list": [ 00:19:24.237 { 00:19:24.237 "name": "NewBaseBdev", 00:19:24.237 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:24.237 "is_configured": true, 00:19:24.237 "data_offset": 2048, 00:19:24.237 "data_size": 63488 00:19:24.237 }, 00:19:24.237 { 00:19:24.237 "name": "BaseBdev2", 00:19:24.237 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:24.237 "is_configured": true, 00:19:24.237 "data_offset": 2048, 00:19:24.237 "data_size": 63488 00:19:24.237 }, 00:19:24.237 { 00:19:24.237 "name": "BaseBdev3", 00:19:24.237 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:24.237 "is_configured": true, 00:19:24.237 "data_offset": 2048, 00:19:24.237 "data_size": 63488 00:19:24.237 } 00:19:24.237 ] 00:19:24.237 }' 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.237 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 [2024-11-27 14:18:55.526021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.804 "name": "Existed_Raid", 00:19:24.804 "aliases": [ 00:19:24.804 "8efcb094-063c-4192-a27c-45b24ec2d654" 00:19:24.804 ], 00:19:24.804 "product_name": "Raid Volume", 00:19:24.804 "block_size": 512, 00:19:24.804 "num_blocks": 126976, 00:19:24.804 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:24.804 "assigned_rate_limits": { 00:19:24.804 "rw_ios_per_sec": 0, 00:19:24.804 "rw_mbytes_per_sec": 0, 00:19:24.804 "r_mbytes_per_sec": 0, 00:19:24.804 "w_mbytes_per_sec": 0 00:19:24.804 }, 00:19:24.804 "claimed": false, 00:19:24.804 "zoned": false, 00:19:24.804 "supported_io_types": { 00:19:24.804 "read": true, 00:19:24.804 "write": true, 00:19:24.804 "unmap": false, 00:19:24.804 "flush": false, 00:19:24.804 "reset": true, 00:19:24.804 "nvme_admin": false, 00:19:24.804 "nvme_io": false, 00:19:24.804 "nvme_io_md": false, 00:19:24.804 "write_zeroes": true, 00:19:24.804 "zcopy": false, 00:19:24.804 "get_zone_info": false, 00:19:24.804 "zone_management": false, 00:19:24.804 "zone_append": false, 00:19:24.804 "compare": false, 00:19:24.804 "compare_and_write": false, 00:19:24.804 "abort": false, 00:19:24.804 "seek_hole": false, 00:19:24.804 "seek_data": false, 00:19:24.804 "copy": false, 00:19:24.804 "nvme_iov_md": false 00:19:24.804 }, 00:19:24.804 "driver_specific": { 00:19:24.804 "raid": { 00:19:24.804 "uuid": "8efcb094-063c-4192-a27c-45b24ec2d654", 00:19:24.804 
"strip_size_kb": 64, 00:19:24.804 "state": "online", 00:19:24.804 "raid_level": "raid5f", 00:19:24.804 "superblock": true, 00:19:24.804 "num_base_bdevs": 3, 00:19:24.804 "num_base_bdevs_discovered": 3, 00:19:24.804 "num_base_bdevs_operational": 3, 00:19:24.804 "base_bdevs_list": [ 00:19:24.804 { 00:19:24.804 "name": "NewBaseBdev", 00:19:24.804 "uuid": "1282f626-c232-4993-b5c2-625a2f4c66dc", 00:19:24.804 "is_configured": true, 00:19:24.804 "data_offset": 2048, 00:19:24.804 "data_size": 63488 00:19:24.804 }, 00:19:24.804 { 00:19:24.804 "name": "BaseBdev2", 00:19:24.804 "uuid": "841d64f5-2850-4b9a-87f8-eef2428f6805", 00:19:24.804 "is_configured": true, 00:19:24.804 "data_offset": 2048, 00:19:24.804 "data_size": 63488 00:19:24.804 }, 00:19:24.804 { 00:19:24.804 "name": "BaseBdev3", 00:19:24.804 "uuid": "bbb6c156-09d0-47a3-b84d-35da1e5296a6", 00:19:24.804 "is_configured": true, 00:19:24.804 "data_offset": 2048, 00:19:24.804 "data_size": 63488 00:19:24.804 } 00:19:24.804 ] 00:19:24.804 } 00:19:24.804 } 00:19:24.804 }' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:24.804 BaseBdev2 00:19:24.804 BaseBdev3' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.804 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.064 [2024-11-27 14:18:55.773354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.064 [2024-11-27 14:18:55.773383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.064 [2024-11-27 14:18:55.773463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.064 [2024-11-27 14:18:55.773752] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.064 [2024-11-27 14:18:55.773766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80739 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80739 ']' 00:19:25.064 14:18:55 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80739 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80739 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80739' 00:19:25.064 killing process with pid 80739 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80739 00:19:25.064 [2024-11-27 14:18:55.807899] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.064 14:18:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80739 00:19:25.322 [2024-11-27 14:18:56.116867] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.755 ************************************ 00:19:26.755 END TEST raid5f_state_function_test_sb 00:19:26.755 ************************************ 00:19:26.755 14:18:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:26.755 00:19:26.755 real 0m10.598s 00:19:26.755 user 0m16.700s 00:19:26.755 sys 0m1.875s 00:19:26.755 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.755 14:18:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.755 14:18:57 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:19:26.755 14:18:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:26.755 14:18:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.755 14:18:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.755 ************************************ 00:19:26.755 START TEST raid5f_superblock_test 00:19:26.755 ************************************ 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81355 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81355 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81355 ']' 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.755 14:18:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.756 [2024-11-27 14:18:57.443100] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:19:26.756 [2024-11-27 14:18:57.443259] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81355 ] 00:19:26.756 [2024-11-27 14:18:57.620358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.014 [2024-11-27 14:18:57.740224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.014 [2024-11-27 14:18:57.949033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.014 [2024-11-27 14:18:57.949077] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.582 malloc1 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.582 [2024-11-27 14:18:58.350012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:27.582 [2024-11-27 14:18:58.350161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.582 [2024-11-27 14:18:58.350213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:27.582 [2024-11-27 14:18:58.350259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.582 [2024-11-27 14:18:58.352456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.582 [2024-11-27 14:18:58.352547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:27.582 pt1 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.582 malloc2 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.582 [2024-11-27 14:18:58.408889] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.582 [2024-11-27 14:18:58.408955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.582 [2024-11-27 14:18:58.408984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:27.582 [2024-11-27 14:18:58.408994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.582 [2024-11-27 14:18:58.411316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.582 [2024-11-27 14:18:58.411358] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:27.582 pt2 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:27.582 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.583 malloc3 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.583 [2024-11-27 14:18:58.480318] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:27.583 [2024-11-27 14:18:58.480428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.583 [2024-11-27 14:18:58.480474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:27.583 [2024-11-27 14:18:58.480508] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.583 [2024-11-27 14:18:58.482846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.583 [2024-11-27 14:18:58.482923] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:27.583 pt3 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.583 [2024-11-27 14:18:58.492338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:27.583 [2024-11-27 14:18:58.494187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.583 [2024-11-27 14:18:58.494314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:27.583 [2024-11-27 14:18:58.494510] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:27.583 [2024-11-27 14:18:58.494571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:19:27.583 [2024-11-27 14:18:58.494832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:27.583 [2024-11-27 14:18:58.500891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:27.583 [2024-11-27 14:18:58.500976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:27.583 [2024-11-27 14:18:58.501318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.583 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.842 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.842 "name": "raid_bdev1", 00:19:27.842 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:27.842 "strip_size_kb": 64, 00:19:27.842 "state": "online", 00:19:27.842 "raid_level": "raid5f", 00:19:27.842 "superblock": true, 00:19:27.842 "num_base_bdevs": 3, 00:19:27.842 "num_base_bdevs_discovered": 3, 00:19:27.842 "num_base_bdevs_operational": 3, 00:19:27.842 "base_bdevs_list": [ 00:19:27.842 { 00:19:27.842 "name": "pt1", 00:19:27.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.842 "is_configured": true, 00:19:27.842 "data_offset": 2048, 00:19:27.842 "data_size": 63488 00:19:27.842 }, 00:19:27.842 { 00:19:27.842 "name": "pt2", 00:19:27.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.842 "is_configured": true, 00:19:27.842 "data_offset": 2048, 00:19:27.843 "data_size": 63488 00:19:27.843 }, 00:19:27.843 { 00:19:27.843 "name": "pt3", 00:19:27.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.843 "is_configured": true, 00:19:27.843 "data_offset": 2048, 00:19:27.843 "data_size": 63488 00:19:27.843 } 00:19:27.843 ] 00:19:27.843 }' 00:19:27.843 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.843 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:28.102 14:18:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.102 [2024-11-27 14:18:58.975974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.102 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:28.102 "name": "raid_bdev1", 00:19:28.102 "aliases": [ 00:19:28.102 "cf1d093e-b96b-4a6b-b4d5-a02acb14a709" 00:19:28.102 ], 00:19:28.102 "product_name": "Raid Volume", 00:19:28.102 "block_size": 512, 00:19:28.102 "num_blocks": 126976, 00:19:28.102 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:28.102 "assigned_rate_limits": { 00:19:28.102 "rw_ios_per_sec": 0, 00:19:28.102 "rw_mbytes_per_sec": 0, 00:19:28.102 "r_mbytes_per_sec": 0, 00:19:28.102 "w_mbytes_per_sec": 0 00:19:28.102 }, 00:19:28.102 "claimed": false, 00:19:28.102 "zoned": false, 00:19:28.102 "supported_io_types": { 00:19:28.102 "read": true, 00:19:28.102 "write": true, 00:19:28.102 "unmap": false, 00:19:28.102 "flush": false, 00:19:28.102 "reset": true, 00:19:28.102 "nvme_admin": false, 00:19:28.102 "nvme_io": false, 00:19:28.102 "nvme_io_md": false, 
00:19:28.102 "write_zeroes": true, 00:19:28.102 "zcopy": false, 00:19:28.102 "get_zone_info": false, 00:19:28.102 "zone_management": false, 00:19:28.102 "zone_append": false, 00:19:28.102 "compare": false, 00:19:28.102 "compare_and_write": false, 00:19:28.102 "abort": false, 00:19:28.102 "seek_hole": false, 00:19:28.102 "seek_data": false, 00:19:28.102 "copy": false, 00:19:28.102 "nvme_iov_md": false 00:19:28.102 }, 00:19:28.102 "driver_specific": { 00:19:28.102 "raid": { 00:19:28.102 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:28.102 "strip_size_kb": 64, 00:19:28.102 "state": "online", 00:19:28.102 "raid_level": "raid5f", 00:19:28.102 "superblock": true, 00:19:28.102 "num_base_bdevs": 3, 00:19:28.102 "num_base_bdevs_discovered": 3, 00:19:28.103 "num_base_bdevs_operational": 3, 00:19:28.103 "base_bdevs_list": [ 00:19:28.103 { 00:19:28.103 "name": "pt1", 00:19:28.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.103 "is_configured": true, 00:19:28.103 "data_offset": 2048, 00:19:28.103 "data_size": 63488 00:19:28.103 }, 00:19:28.103 { 00:19:28.103 "name": "pt2", 00:19:28.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.103 "is_configured": true, 00:19:28.103 "data_offset": 2048, 00:19:28.103 "data_size": 63488 00:19:28.103 }, 00:19:28.103 { 00:19:28.103 "name": "pt3", 00:19:28.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.103 "is_configured": true, 00:19:28.103 "data_offset": 2048, 00:19:28.103 "data_size": 63488 00:19:28.103 } 00:19:28.103 ] 00:19:28.103 } 00:19:28.103 } 00:19:28.103 }' 00:19:28.103 14:18:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:28.103 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:28.103 pt2 00:19:28.103 pt3' 00:19:28.103 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:28.362 
14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 [2024-11-27 14:18:59.247386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cf1d093e-b96b-4a6b-b4d5-a02acb14a709 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cf1d093e-b96b-4a6b-b4d5-a02acb14a709 ']' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:28.362 14:18:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.362 [2024-11-27 14:18:59.295105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.362 [2024-11-27 14:18:59.295149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.362 [2024-11-27 14:18:59.295248] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.362 [2024-11-27 14:18:59.295324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.362 [2024-11-27 14:18:59.295335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.362 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.622 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 [2024-11-27 14:18:59.442936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:28.623 [2024-11-27 14:18:59.445052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:28.623 [2024-11-27 14:18:59.445175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:28.623 [2024-11-27 14:18:59.445258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:28.623 [2024-11-27 14:18:59.445358] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:28.623 [2024-11-27 14:18:59.445437] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:28.623 [2024-11-27 14:18:59.445495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.623 [2024-11-27 14:18:59.445508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:28.623 request: 00:19:28.623 { 00:19:28.623 "name": "raid_bdev1", 00:19:28.623 "raid_level": "raid5f", 00:19:28.623 "base_bdevs": [ 00:19:28.623 "malloc1", 00:19:28.623 "malloc2", 00:19:28.623 "malloc3" 00:19:28.623 ], 00:19:28.623 "strip_size_kb": 64, 00:19:28.623 "superblock": false, 00:19:28.623 "method": "bdev_raid_create", 00:19:28.623 "req_id": 1 00:19:28.623 } 00:19:28.623 Got JSON-RPC error response 00:19:28.623 response: 00:19:28.623 { 00:19:28.623 "code": -17, 00:19:28.623 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:28.623 } 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.623 
14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 [2024-11-27 14:18:59.510742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:28.623 [2024-11-27 14:18:59.510878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.623 [2024-11-27 14:18:59.510921] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:28.623 [2024-11-27 14:18:59.510954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.623 [2024-11-27 14:18:59.513389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.623 [2024-11-27 14:18:59.513482] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:28.623 [2024-11-27 14:18:59.513613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:28.623 [2024-11-27 14:18:59.513705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:28.623 pt1 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.623 "name": "raid_bdev1", 00:19:28.623 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:28.623 "strip_size_kb": 64, 00:19:28.623 "state": "configuring", 00:19:28.623 "raid_level": "raid5f", 00:19:28.623 "superblock": true, 00:19:28.623 "num_base_bdevs": 3, 00:19:28.623 "num_base_bdevs_discovered": 1, 00:19:28.623 
"num_base_bdevs_operational": 3, 00:19:28.623 "base_bdevs_list": [ 00:19:28.623 { 00:19:28.623 "name": "pt1", 00:19:28.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:28.623 "is_configured": true, 00:19:28.623 "data_offset": 2048, 00:19:28.623 "data_size": 63488 00:19:28.623 }, 00:19:28.623 { 00:19:28.623 "name": null, 00:19:28.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.623 "is_configured": false, 00:19:28.623 "data_offset": 2048, 00:19:28.623 "data_size": 63488 00:19:28.623 }, 00:19:28.623 { 00:19:28.623 "name": null, 00:19:28.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.623 "is_configured": false, 00:19:28.623 "data_offset": 2048, 00:19:28.623 "data_size": 63488 00:19:28.623 } 00:19:28.623 ] 00:19:28.623 }' 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.623 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.192 [2024-11-27 14:18:59.910080] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.192 [2024-11-27 14:18:59.910160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.192 [2024-11-27 14:18:59.910187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:29.192 [2024-11-27 14:18:59.910199] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.192 [2024-11-27 14:18:59.910689] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.192 [2024-11-27 14:18:59.910732] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.192 [2024-11-27 14:18:59.910833] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:29.192 [2024-11-27 14:18:59.910870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.192 pt2 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.192 [2024-11-27 14:18:59.918070] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.192 "name": "raid_bdev1", 00:19:29.192 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:29.192 "strip_size_kb": 64, 00:19:29.192 "state": "configuring", 00:19:29.192 "raid_level": "raid5f", 00:19:29.192 "superblock": true, 00:19:29.192 "num_base_bdevs": 3, 00:19:29.192 "num_base_bdevs_discovered": 1, 00:19:29.192 "num_base_bdevs_operational": 3, 00:19:29.192 "base_bdevs_list": [ 00:19:29.192 { 00:19:29.192 "name": "pt1", 00:19:29.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.192 "is_configured": true, 00:19:29.192 "data_offset": 2048, 00:19:29.192 "data_size": 63488 00:19:29.192 }, 00:19:29.192 { 00:19:29.192 "name": null, 00:19:29.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.192 "is_configured": false, 00:19:29.192 "data_offset": 0, 00:19:29.192 "data_size": 63488 00:19:29.192 }, 00:19:29.192 { 00:19:29.192 "name": null, 00:19:29.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.192 "is_configured": false, 00:19:29.192 "data_offset": 2048, 00:19:29.192 "data_size": 63488 00:19:29.192 } 00:19:29.192 ] 00:19:29.192 }' 00:19:29.192 14:18:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.192 14:18:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.759 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:29.759 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:29.759 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.759 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.759 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.760 [2024-11-27 14:19:00.433186] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.760 [2024-11-27 14:19:00.433359] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.760 [2024-11-27 14:19:00.433403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:29.760 [2024-11-27 14:19:00.433451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.760 [2024-11-27 14:19:00.433995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.760 [2024-11-27 14:19:00.434063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.760 [2024-11-27 14:19:00.434204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:29.760 [2024-11-27 14:19:00.434263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.760 pt2 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:29.760 14:19:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.760 [2024-11-27 14:19:00.445139] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:29.760 [2024-11-27 14:19:00.445191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.760 [2024-11-27 14:19:00.445207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:29.760 [2024-11-27 14:19:00.445220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.760 [2024-11-27 14:19:00.445614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.760 [2024-11-27 14:19:00.445636] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:29.760 [2024-11-27 14:19:00.445713] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:29.760 [2024-11-27 14:19:00.445736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.760 [2024-11-27 14:19:00.445888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.760 [2024-11-27 14:19:00.445901] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:29.760 [2024-11-27 14:19:00.446158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:29.760 [2024-11-27 14:19:00.452067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.760 [2024-11-27 14:19:00.452163] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:29.760 [2024-11-27 14:19:00.452438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.760 pt3 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.760 "name": "raid_bdev1", 00:19:29.760 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:29.760 "strip_size_kb": 64, 00:19:29.760 "state": "online", 00:19:29.760 "raid_level": "raid5f", 00:19:29.760 "superblock": true, 00:19:29.760 "num_base_bdevs": 3, 00:19:29.760 "num_base_bdevs_discovered": 3, 00:19:29.760 "num_base_bdevs_operational": 3, 00:19:29.760 "base_bdevs_list": [ 00:19:29.760 { 00:19:29.760 "name": "pt1", 00:19:29.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.760 "is_configured": true, 00:19:29.760 "data_offset": 2048, 00:19:29.760 "data_size": 63488 00:19:29.760 }, 00:19:29.760 { 00:19:29.760 "name": "pt2", 00:19:29.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.760 "is_configured": true, 00:19:29.760 "data_offset": 2048, 00:19:29.760 "data_size": 63488 00:19:29.760 }, 00:19:29.760 { 00:19:29.760 "name": "pt3", 00:19:29.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.760 "is_configured": true, 00:19:29.760 "data_offset": 2048, 00:19:29.760 "data_size": 63488 00:19:29.760 } 00:19:29.760 ] 00:19:29.760 }' 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.760 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.019 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:30.019 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.020 [2024-11-27 14:19:00.938505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.020 "name": "raid_bdev1", 00:19:30.020 "aliases": [ 00:19:30.020 "cf1d093e-b96b-4a6b-b4d5-a02acb14a709" 00:19:30.020 ], 00:19:30.020 "product_name": "Raid Volume", 00:19:30.020 "block_size": 512, 00:19:30.020 "num_blocks": 126976, 00:19:30.020 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:30.020 "assigned_rate_limits": { 00:19:30.020 "rw_ios_per_sec": 0, 00:19:30.020 "rw_mbytes_per_sec": 0, 00:19:30.020 "r_mbytes_per_sec": 0, 00:19:30.020 "w_mbytes_per_sec": 0 00:19:30.020 }, 00:19:30.020 "claimed": false, 00:19:30.020 "zoned": false, 00:19:30.020 "supported_io_types": { 00:19:30.020 "read": true, 00:19:30.020 "write": true, 00:19:30.020 "unmap": false, 00:19:30.020 "flush": false, 00:19:30.020 "reset": true, 00:19:30.020 "nvme_admin": false, 00:19:30.020 "nvme_io": false, 00:19:30.020 "nvme_io_md": false, 00:19:30.020 "write_zeroes": true, 00:19:30.020 "zcopy": false, 00:19:30.020 
"get_zone_info": false, 00:19:30.020 "zone_management": false, 00:19:30.020 "zone_append": false, 00:19:30.020 "compare": false, 00:19:30.020 "compare_and_write": false, 00:19:30.020 "abort": false, 00:19:30.020 "seek_hole": false, 00:19:30.020 "seek_data": false, 00:19:30.020 "copy": false, 00:19:30.020 "nvme_iov_md": false 00:19:30.020 }, 00:19:30.020 "driver_specific": { 00:19:30.020 "raid": { 00:19:30.020 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:30.020 "strip_size_kb": 64, 00:19:30.020 "state": "online", 00:19:30.020 "raid_level": "raid5f", 00:19:30.020 "superblock": true, 00:19:30.020 "num_base_bdevs": 3, 00:19:30.020 "num_base_bdevs_discovered": 3, 00:19:30.020 "num_base_bdevs_operational": 3, 00:19:30.020 "base_bdevs_list": [ 00:19:30.020 { 00:19:30.020 "name": "pt1", 00:19:30.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.020 "is_configured": true, 00:19:30.020 "data_offset": 2048, 00:19:30.020 "data_size": 63488 00:19:30.020 }, 00:19:30.020 { 00:19:30.020 "name": "pt2", 00:19:30.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.020 "is_configured": true, 00:19:30.020 "data_offset": 2048, 00:19:30.020 "data_size": 63488 00:19:30.020 }, 00:19:30.020 { 00:19:30.020 "name": "pt3", 00:19:30.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.020 "is_configured": true, 00:19:30.020 "data_offset": 2048, 00:19:30.020 "data_size": 63488 00:19:30.020 } 00:19:30.020 ] 00:19:30.020 } 00:19:30.020 } 00:19:30.020 }' 00:19:30.020 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.280 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:30.280 pt2 00:19:30.280 pt3' 00:19:30.280 14:19:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.280 14:19:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:30.280 [2024-11-27 14:19:01.201993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.280 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cf1d093e-b96b-4a6b-b4d5-a02acb14a709 '!=' cf1d093e-b96b-4a6b-b4d5-a02acb14a709 ']' 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.539 [2024-11-27 14:19:01.249794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.539 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.539 
14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.540 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.540 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.540 "name": "raid_bdev1", 00:19:30.540 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:30.540 "strip_size_kb": 64, 00:19:30.540 "state": "online", 00:19:30.540 "raid_level": "raid5f", 00:19:30.540 "superblock": true, 00:19:30.540 "num_base_bdevs": 3, 00:19:30.540 "num_base_bdevs_discovered": 2, 00:19:30.540 "num_base_bdevs_operational": 2, 00:19:30.540 "base_bdevs_list": [ 00:19:30.540 { 00:19:30.540 "name": null, 00:19:30.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.540 "is_configured": false, 00:19:30.540 "data_offset": 0, 00:19:30.540 "data_size": 63488 00:19:30.540 }, 00:19:30.540 { 00:19:30.540 "name": "pt2", 00:19:30.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.540 "is_configured": true, 00:19:30.540 "data_offset": 2048, 00:19:30.540 "data_size": 63488 00:19:30.540 }, 00:19:30.540 { 00:19:30.540 "name": "pt3", 00:19:30.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.540 "is_configured": true, 00:19:30.540 "data_offset": 2048, 00:19:30.540 "data_size": 63488 00:19:30.540 } 00:19:30.540 ] 00:19:30.540 }' 00:19:30.540 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.540 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.798 [2024-11-27 14:19:01.653068] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.798 [2024-11-27 14:19:01.653186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.798 [2024-11-27 14:19:01.653310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.798 [2024-11-27 14:19:01.653398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.798 [2024-11-27 14:19:01.653460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.798 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.799 [2024-11-27 14:19:01.732951] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:30.799 [2024-11-27 14:19:01.733098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.799 [2024-11-27 14:19:01.733133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:30.799 [2024-11-27 14:19:01.733147] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:30.799 [2024-11-27 14:19:01.735563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.799 [2024-11-27 14:19:01.735612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:30.799 [2024-11-27 14:19:01.735714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:30.799 [2024-11-27 14:19:01.735768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:30.799 pt2 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.799 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.076 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.076 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.076 "name": "raid_bdev1", 00:19:31.076 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:31.076 "strip_size_kb": 64, 00:19:31.076 "state": "configuring", 00:19:31.076 "raid_level": "raid5f", 00:19:31.076 "superblock": true, 00:19:31.076 "num_base_bdevs": 3, 00:19:31.076 "num_base_bdevs_discovered": 1, 00:19:31.076 "num_base_bdevs_operational": 2, 00:19:31.076 "base_bdevs_list": [ 00:19:31.076 { 00:19:31.076 "name": null, 00:19:31.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.076 "is_configured": false, 00:19:31.076 "data_offset": 2048, 00:19:31.076 "data_size": 63488 00:19:31.076 }, 00:19:31.076 { 00:19:31.076 "name": "pt2", 00:19:31.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.076 "is_configured": true, 00:19:31.076 "data_offset": 2048, 00:19:31.076 "data_size": 63488 00:19:31.076 }, 00:19:31.076 { 00:19:31.076 "name": null, 00:19:31.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.076 "is_configured": false, 00:19:31.076 "data_offset": 2048, 00:19:31.076 "data_size": 63488 00:19:31.076 } 00:19:31.076 ] 00:19:31.076 }' 00:19:31.076 14:19:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.076 14:19:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.335 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.335 [2024-11-27 14:19:02.200142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:31.335 [2024-11-27 14:19:02.200277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.335 [2024-11-27 14:19:02.200306] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:31.335 [2024-11-27 14:19:02.200318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.335 [2024-11-27 14:19:02.200840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.335 [2024-11-27 14:19:02.200864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:31.336 [2024-11-27 14:19:02.200959] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:31.336 [2024-11-27 14:19:02.200991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:31.336 [2024-11-27 14:19:02.201130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:31.336 [2024-11-27 14:19:02.201154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:31.336 [2024-11-27 14:19:02.201412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:31.336 [2024-11-27 14:19:02.207037] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:31.336 [2024-11-27 14:19:02.207097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:19:31.336 [2024-11-27 14:19:02.207491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.336 pt3 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.336 14:19:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.336 "name": "raid_bdev1", 00:19:31.336 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:31.336 "strip_size_kb": 64, 00:19:31.336 "state": "online", 00:19:31.336 "raid_level": "raid5f", 00:19:31.336 "superblock": true, 00:19:31.336 "num_base_bdevs": 3, 00:19:31.336 "num_base_bdevs_discovered": 2, 00:19:31.336 "num_base_bdevs_operational": 2, 00:19:31.336 "base_bdevs_list": [ 00:19:31.336 { 00:19:31.336 "name": null, 00:19:31.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.336 "is_configured": false, 00:19:31.336 "data_offset": 2048, 00:19:31.336 "data_size": 63488 00:19:31.336 }, 00:19:31.336 { 00:19:31.336 "name": "pt2", 00:19:31.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.336 "is_configured": true, 00:19:31.336 "data_offset": 2048, 00:19:31.336 "data_size": 63488 00:19:31.336 }, 00:19:31.336 { 00:19:31.336 "name": "pt3", 00:19:31.336 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.336 "is_configured": true, 00:19:31.336 "data_offset": 2048, 00:19:31.336 "data_size": 63488 00:19:31.336 } 00:19:31.336 ] 00:19:31.336 }' 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.336 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.904 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:31.904 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.904 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 [2024-11-27 14:19:02.675191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.905 [2024-11-27 14:19:02.675226] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.905 [2024-11-27 14:19:02.675313] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.905 [2024-11-27 14:19:02.675396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.905 [2024-11-27 14:19:02.675408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 [2024-11-27 14:19:02.735086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:31.905 [2024-11-27 14:19:02.735210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.905 [2024-11-27 14:19:02.735253] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:31.905 [2024-11-27 14:19:02.735306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.905 [2024-11-27 14:19:02.737915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.905 [2024-11-27 14:19:02.737998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:31.905 [2024-11-27 14:19:02.738154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:31.905 [2024-11-27 14:19:02.738237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:31.905 [2024-11-27 14:19:02.738468] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:31.905 [2024-11-27 14:19:02.738537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:31.905 [2024-11-27 14:19:02.738586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:31.905 [2024-11-27 14:19:02.738712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:31.905 pt1 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:19:31.905 14:19:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.905 "name": "raid_bdev1", 00:19:31.905 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:31.905 "strip_size_kb": 64, 00:19:31.905 "state": "configuring", 00:19:31.905 "raid_level": "raid5f", 00:19:31.905 
"superblock": true, 00:19:31.905 "num_base_bdevs": 3, 00:19:31.905 "num_base_bdevs_discovered": 1, 00:19:31.905 "num_base_bdevs_operational": 2, 00:19:31.905 "base_bdevs_list": [ 00:19:31.905 { 00:19:31.905 "name": null, 00:19:31.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.905 "is_configured": false, 00:19:31.905 "data_offset": 2048, 00:19:31.905 "data_size": 63488 00:19:31.905 }, 00:19:31.905 { 00:19:31.905 "name": "pt2", 00:19:31.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.905 "is_configured": true, 00:19:31.905 "data_offset": 2048, 00:19:31.905 "data_size": 63488 00:19:31.905 }, 00:19:31.905 { 00:19:31.905 "name": null, 00:19:31.905 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.905 "is_configured": false, 00:19:31.905 "data_offset": 2048, 00:19:31.905 "data_size": 63488 00:19:31.905 } 00:19:31.905 ] 00:19:31.905 }' 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.905 14:19:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.475 [2024-11-27 14:19:03.274224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:32.475 [2024-11-27 14:19:03.274293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.475 [2024-11-27 14:19:03.274317] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:32.475 [2024-11-27 14:19:03.274329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.475 [2024-11-27 14:19:03.274858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.475 [2024-11-27 14:19:03.274879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:32.475 [2024-11-27 14:19:03.274969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:32.475 [2024-11-27 14:19:03.274995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:32.475 [2024-11-27 14:19:03.275150] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:32.475 [2024-11-27 14:19:03.275161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:32.475 [2024-11-27 14:19:03.275438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:32.475 [2024-11-27 14:19:03.282049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:32.475 [2024-11-27 14:19:03.282080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:32.475 [2024-11-27 14:19:03.282355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.475 pt3 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.475 "name": "raid_bdev1", 00:19:32.475 "uuid": "cf1d093e-b96b-4a6b-b4d5-a02acb14a709", 00:19:32.475 "strip_size_kb": 64, 00:19:32.475 "state": "online", 00:19:32.475 "raid_level": 
"raid5f", 00:19:32.475 "superblock": true, 00:19:32.475 "num_base_bdevs": 3, 00:19:32.475 "num_base_bdevs_discovered": 2, 00:19:32.475 "num_base_bdevs_operational": 2, 00:19:32.475 "base_bdevs_list": [ 00:19:32.475 { 00:19:32.475 "name": null, 00:19:32.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.475 "is_configured": false, 00:19:32.475 "data_offset": 2048, 00:19:32.475 "data_size": 63488 00:19:32.475 }, 00:19:32.475 { 00:19:32.475 "name": "pt2", 00:19:32.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:32.475 "is_configured": true, 00:19:32.475 "data_offset": 2048, 00:19:32.475 "data_size": 63488 00:19:32.475 }, 00:19:32.475 { 00:19:32.475 "name": "pt3", 00:19:32.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:32.475 "is_configured": true, 00:19:32.475 "data_offset": 2048, 00:19:32.475 "data_size": 63488 00:19:32.475 } 00:19:32.475 ] 00:19:32.475 }' 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.475 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.043 [2024-11-27 14:19:03.789412] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' cf1d093e-b96b-4a6b-b4d5-a02acb14a709 '!=' cf1d093e-b96b-4a6b-b4d5-a02acb14a709 ']' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81355 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81355 ']' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81355 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81355 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.043 killing process with pid 81355 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81355' 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81355 00:19:33.043 [2024-11-27 14:19:03.853559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.043 14:19:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81355 
00:19:33.043 [2024-11-27 14:19:03.853670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.043 [2024-11-27 14:19:03.853751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.043 [2024-11-27 14:19:03.853766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:33.301 [2024-11-27 14:19:04.176559] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.678 14:19:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:34.678 00:19:34.678 real 0m7.989s 00:19:34.678 user 0m12.455s 00:19:34.678 sys 0m1.428s 00:19:34.678 14:19:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.678 14:19:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.678 ************************************ 00:19:34.678 END TEST raid5f_superblock_test 00:19:34.678 ************************************ 00:19:34.678 14:19:05 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:34.678 14:19:05 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:19:34.678 14:19:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:34.678 14:19:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.678 14:19:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.678 ************************************ 00:19:34.678 START TEST raid5f_rebuild_test 00:19:34.678 ************************************ 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:34.678 14:19:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81800 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81800 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81800 ']' 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.678 14:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.678 [2024-11-27 14:19:05.535053] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:34.678 [2024-11-27 14:19:05.535678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81800 ] 00:19:34.678 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:34.678 Zero copy mechanism will not be used. 00:19:34.938 [2024-11-27 14:19:05.719210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.938 [2024-11-27 14:19:05.851004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.197 [2024-11-27 14:19:06.056074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.197 [2024-11-27 14:19:06.056150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.455 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.714 BaseBdev1_malloc 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.714 
14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.714 [2024-11-27 14:19:06.439179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:35.714 [2024-11-27 14:19:06.439272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.714 [2024-11-27 14:19:06.439300] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:35.714 [2024-11-27 14:19:06.439313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.714 [2024-11-27 14:19:06.441781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.714 [2024-11-27 14:19:06.441824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.714 BaseBdev1 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.714 BaseBdev2_malloc 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.714 [2024-11-27 14:19:06.495682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:35.714 [2024-11-27 14:19:06.495826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.714 [2024-11-27 14:19:06.495858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:35.714 [2024-11-27 14:19:06.495881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.714 [2024-11-27 14:19:06.498233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.714 [2024-11-27 14:19:06.498273] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:35.714 BaseBdev2 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:35.714 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 BaseBdev3_malloc 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 [2024-11-27 14:19:06.574266] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:35.715 [2024-11-27 14:19:06.574375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.715 [2024-11-27 14:19:06.574420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:35.715 [2024-11-27 14:19:06.574433] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.715 [2024-11-27 14:19:06.577311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.715 [2024-11-27 14:19:06.577367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:35.715 BaseBdev3 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 spare_malloc 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 spare_delay 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 [2024-11-27 14:19:06.647096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:35.715 [2024-11-27 14:19:06.647188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.715 [2024-11-27 14:19:06.647215] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:35.715 [2024-11-27 14:19:06.647227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.715 [2024-11-27 14:19:06.649937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.715 [2024-11-27 14:19:06.650004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:35.715 spare 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.715 [2024-11-27 14:19:06.655174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.715 [2024-11-27 14:19:06.657319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:35.715 [2024-11-27 14:19:06.657391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:35.715 [2024-11-27 14:19:06.657500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:35.715 [2024-11-27 14:19:06.657521] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:35.715 [2024-11-27 
14:19:06.657852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:35.715 [2024-11-27 14:19:06.664810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:35.715 [2024-11-27 14:19:06.664887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:35.715 [2024-11-27 14:19:06.665214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:35.715 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.974 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.974 "name": "raid_bdev1", 00:19:35.974 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:35.974 "strip_size_kb": 64, 00:19:35.974 "state": "online", 00:19:35.974 "raid_level": "raid5f", 00:19:35.974 "superblock": false, 00:19:35.974 "num_base_bdevs": 3, 00:19:35.974 "num_base_bdevs_discovered": 3, 00:19:35.974 "num_base_bdevs_operational": 3, 00:19:35.975 "base_bdevs_list": [ 00:19:35.975 { 00:19:35.975 "name": "BaseBdev1", 00:19:35.975 "uuid": "2d8c227f-a5db-5f68-8e88-f7efc84c5be6", 00:19:35.975 "is_configured": true, 00:19:35.975 "data_offset": 0, 00:19:35.975 "data_size": 65536 00:19:35.975 }, 00:19:35.975 { 00:19:35.975 "name": "BaseBdev2", 00:19:35.975 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:35.975 "is_configured": true, 00:19:35.975 "data_offset": 0, 00:19:35.975 "data_size": 65536 00:19:35.975 }, 00:19:35.975 { 00:19:35.975 "name": "BaseBdev3", 00:19:35.975 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:35.975 "is_configured": true, 00:19:35.975 "data_offset": 0, 00:19:35.975 "data_size": 65536 00:19:35.975 } 00:19:35.975 ] 00:19:35.975 }' 00:19:35.975 14:19:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.975 14:19:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.233 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:36.233 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:36.233 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.233 14:19:07 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.234 [2024-11-27 14:19:07.100336] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:36.234 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:36.492 [2024-11-27 14:19:07.387642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:36.492 /dev/nbd0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:36.492 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:36.750 1+0 records in 00:19:36.750 1+0 records out 00:19:36.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554145 s, 7.4 MB/s 00:19:36.750 
14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:36.750 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:19:37.008 512+0 records in 00:19:37.008 512+0 records out 00:19:37.009 67108864 bytes (67 MB, 64 MiB) copied, 0.403385 s, 166 MB/s 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:19:37.009 14:19:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:37.268 [2024-11-27 14:19:08.116458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.268 [2024-11-27 14:19:08.133248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.268 14:19:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.268 "name": "raid_bdev1", 00:19:37.268 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:37.268 "strip_size_kb": 64, 00:19:37.268 "state": "online", 00:19:37.268 "raid_level": "raid5f", 00:19:37.268 "superblock": false, 00:19:37.268 "num_base_bdevs": 3, 00:19:37.268 "num_base_bdevs_discovered": 2, 00:19:37.268 "num_base_bdevs_operational": 2, 00:19:37.268 "base_bdevs_list": [ 00:19:37.268 { 00:19:37.268 "name": null, 00:19:37.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.268 "is_configured": false, 00:19:37.268 "data_offset": 0, 00:19:37.268 "data_size": 65536 00:19:37.268 }, 00:19:37.268 { 00:19:37.268 
"name": "BaseBdev2", 00:19:37.268 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:37.268 "is_configured": true, 00:19:37.268 "data_offset": 0, 00:19:37.268 "data_size": 65536 00:19:37.268 }, 00:19:37.268 { 00:19:37.268 "name": "BaseBdev3", 00:19:37.268 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:37.268 "is_configured": true, 00:19:37.268 "data_offset": 0, 00:19:37.268 "data_size": 65536 00:19:37.268 } 00:19:37.268 ] 00:19:37.268 }' 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.268 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.839 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:37.839 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.840 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.840 [2024-11-27 14:19:08.600447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:37.840 [2024-11-27 14:19:08.619340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:19:37.840 14:19:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.840 14:19:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:37.840 [2024-11-27 14:19:08.628777] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.780 "name": "raid_bdev1", 00:19:38.780 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:38.780 "strip_size_kb": 64, 00:19:38.780 "state": "online", 00:19:38.780 "raid_level": "raid5f", 00:19:38.780 "superblock": false, 00:19:38.780 "num_base_bdevs": 3, 00:19:38.780 "num_base_bdevs_discovered": 3, 00:19:38.780 "num_base_bdevs_operational": 3, 00:19:38.780 "process": { 00:19:38.780 "type": "rebuild", 00:19:38.780 "target": "spare", 00:19:38.780 "progress": { 00:19:38.780 "blocks": 20480, 00:19:38.780 "percent": 15 00:19:38.780 } 00:19:38.780 }, 00:19:38.780 "base_bdevs_list": [ 00:19:38.780 { 00:19:38.780 "name": "spare", 00:19:38.780 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:38.780 "is_configured": true, 00:19:38.780 "data_offset": 0, 00:19:38.780 "data_size": 65536 00:19:38.780 }, 00:19:38.780 { 00:19:38.780 "name": "BaseBdev2", 00:19:38.780 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:38.780 "is_configured": true, 00:19:38.780 "data_offset": 0, 00:19:38.780 "data_size": 65536 00:19:38.780 }, 00:19:38.780 { 00:19:38.780 "name": "BaseBdev3", 00:19:38.780 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:38.780 "is_configured": true, 00:19:38.780 "data_offset": 0, 00:19:38.780 
"data_size": 65536 00:19:38.780 } 00:19:38.780 ] 00:19:38.780 }' 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:38.780 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.039 [2024-11-27 14:19:09.763939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:39.039 [2024-11-27 14:19:09.839597] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:39.039 [2024-11-27 14:19:09.839683] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:39.039 [2024-11-27 14:19:09.839705] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:39.039 [2024-11-27 14:19:09.839714] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.039 "name": "raid_bdev1", 00:19:39.039 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:39.039 "strip_size_kb": 64, 00:19:39.039 "state": "online", 00:19:39.039 "raid_level": "raid5f", 00:19:39.039 "superblock": false, 00:19:39.039 "num_base_bdevs": 3, 00:19:39.039 "num_base_bdevs_discovered": 2, 00:19:39.039 "num_base_bdevs_operational": 2, 00:19:39.039 "base_bdevs_list": [ 00:19:39.039 { 00:19:39.039 "name": null, 00:19:39.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.039 "is_configured": false, 00:19:39.039 "data_offset": 0, 00:19:39.039 "data_size": 65536 00:19:39.039 }, 00:19:39.039 { 00:19:39.039 "name": "BaseBdev2", 00:19:39.039 
"uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:39.039 "is_configured": true, 00:19:39.039 "data_offset": 0, 00:19:39.039 "data_size": 65536 00:19:39.039 }, 00:19:39.039 { 00:19:39.039 "name": "BaseBdev3", 00:19:39.039 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:39.039 "is_configured": true, 00:19:39.039 "data_offset": 0, 00:19:39.039 "data_size": 65536 00:19:39.039 } 00:19:39.039 ] 00:19:39.039 }' 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.039 14:19:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.608 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.609 "name": "raid_bdev1", 00:19:39.609 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:39.609 "strip_size_kb": 64, 00:19:39.609 "state": "online", 00:19:39.609 "raid_level": 
"raid5f", 00:19:39.609 "superblock": false, 00:19:39.609 "num_base_bdevs": 3, 00:19:39.609 "num_base_bdevs_discovered": 2, 00:19:39.609 "num_base_bdevs_operational": 2, 00:19:39.609 "base_bdevs_list": [ 00:19:39.609 { 00:19:39.609 "name": null, 00:19:39.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.609 "is_configured": false, 00:19:39.609 "data_offset": 0, 00:19:39.609 "data_size": 65536 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "name": "BaseBdev2", 00:19:39.609 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:39.609 "is_configured": true, 00:19:39.609 "data_offset": 0, 00:19:39.609 "data_size": 65536 00:19:39.609 }, 00:19:39.609 { 00:19:39.609 "name": "BaseBdev3", 00:19:39.609 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:39.609 "is_configured": true, 00:19:39.609 "data_offset": 0, 00:19:39.609 "data_size": 65536 00:19:39.609 } 00:19:39.609 ] 00:19:39.609 }' 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.609 [2024-11-27 14:19:10.503414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:39.609 [2024-11-27 14:19:10.521993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.609 14:19:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:39.609 [2024-11-27 14:19:10.531075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.006 "name": "raid_bdev1", 00:19:41.006 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:41.006 "strip_size_kb": 64, 00:19:41.006 "state": "online", 00:19:41.006 "raid_level": "raid5f", 00:19:41.006 "superblock": false, 00:19:41.006 "num_base_bdevs": 3, 00:19:41.006 "num_base_bdevs_discovered": 3, 00:19:41.006 "num_base_bdevs_operational": 3, 00:19:41.006 "process": { 00:19:41.006 "type": "rebuild", 00:19:41.006 "target": "spare", 00:19:41.006 "progress": { 00:19:41.006 "blocks": 20480, 00:19:41.006 
"percent": 15 00:19:41.006 } 00:19:41.006 }, 00:19:41.006 "base_bdevs_list": [ 00:19:41.006 { 00:19:41.006 "name": "spare", 00:19:41.006 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 "data_size": 65536 00:19:41.006 }, 00:19:41.006 { 00:19:41.006 "name": "BaseBdev2", 00:19:41.006 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 "data_size": 65536 00:19:41.006 }, 00:19:41.006 { 00:19:41.006 "name": "BaseBdev3", 00:19:41.006 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 "data_size": 65536 00:19:41.006 } 00:19:41.006 ] 00:19:41.006 }' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=559 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.006 "name": "raid_bdev1", 00:19:41.006 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:41.006 "strip_size_kb": 64, 00:19:41.006 "state": "online", 00:19:41.006 "raid_level": "raid5f", 00:19:41.006 "superblock": false, 00:19:41.006 "num_base_bdevs": 3, 00:19:41.006 "num_base_bdevs_discovered": 3, 00:19:41.006 "num_base_bdevs_operational": 3, 00:19:41.006 "process": { 00:19:41.006 "type": "rebuild", 00:19:41.006 "target": "spare", 00:19:41.006 "progress": { 00:19:41.006 "blocks": 22528, 00:19:41.006 "percent": 17 00:19:41.006 } 00:19:41.006 }, 00:19:41.006 "base_bdevs_list": [ 00:19:41.006 { 00:19:41.006 "name": "spare", 00:19:41.006 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 "data_size": 65536 00:19:41.006 }, 00:19:41.006 { 00:19:41.006 "name": "BaseBdev2", 00:19:41.006 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 
"data_size": 65536 00:19:41.006 }, 00:19:41.006 { 00:19:41.006 "name": "BaseBdev3", 00:19:41.006 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:41.006 "is_configured": true, 00:19:41.006 "data_offset": 0, 00:19:41.006 "data_size": 65536 00:19:41.006 } 00:19:41.006 ] 00:19:41.006 }' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.006 14:19:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.945 14:19:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.946 14:19:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.946 14:19:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.946 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.946 "name": "raid_bdev1", 00:19:41.946 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:41.946 "strip_size_kb": 64, 00:19:41.946 "state": "online", 00:19:41.946 "raid_level": "raid5f", 00:19:41.946 "superblock": false, 00:19:41.946 "num_base_bdevs": 3, 00:19:41.946 "num_base_bdevs_discovered": 3, 00:19:41.946 "num_base_bdevs_operational": 3, 00:19:41.946 "process": { 00:19:41.946 "type": "rebuild", 00:19:41.946 "target": "spare", 00:19:41.946 "progress": { 00:19:41.946 "blocks": 45056, 00:19:41.946 "percent": 34 00:19:41.946 } 00:19:41.946 }, 00:19:41.946 "base_bdevs_list": [ 00:19:41.946 { 00:19:41.946 "name": "spare", 00:19:41.946 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:41.946 "is_configured": true, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 65536 00:19:41.946 }, 00:19:41.946 { 00:19:41.946 "name": "BaseBdev2", 00:19:41.946 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:41.946 "is_configured": true, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 65536 00:19:41.946 }, 00:19:41.946 { 00:19:41.946 "name": "BaseBdev3", 00:19:41.946 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:41.946 "is_configured": true, 00:19:41.946 "data_offset": 0, 00:19:41.946 "data_size": 65536 00:19:41.946 } 00:19:41.946 ] 00:19:41.946 }' 00:19:41.946 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:42.205 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:42.205 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:42.205 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.205 14:19:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.142 14:19:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.142 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.142 "name": "raid_bdev1", 00:19:43.142 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:43.142 "strip_size_kb": 64, 00:19:43.142 "state": "online", 00:19:43.142 "raid_level": "raid5f", 00:19:43.142 "superblock": false, 00:19:43.142 "num_base_bdevs": 3, 00:19:43.142 "num_base_bdevs_discovered": 3, 00:19:43.142 "num_base_bdevs_operational": 3, 00:19:43.142 "process": { 00:19:43.142 "type": "rebuild", 00:19:43.142 "target": "spare", 00:19:43.142 "progress": { 00:19:43.142 "blocks": 69632, 00:19:43.142 "percent": 53 00:19:43.142 } 00:19:43.142 }, 00:19:43.142 "base_bdevs_list": [ 00:19:43.142 { 00:19:43.142 "name": "spare", 00:19:43.142 "uuid": 
"e47672a2-591e-5b30-8666-e6639d428abd", 00:19:43.142 "is_configured": true, 00:19:43.142 "data_offset": 0, 00:19:43.142 "data_size": 65536 00:19:43.142 }, 00:19:43.142 { 00:19:43.142 "name": "BaseBdev2", 00:19:43.142 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:43.142 "is_configured": true, 00:19:43.142 "data_offset": 0, 00:19:43.142 "data_size": 65536 00:19:43.142 }, 00:19:43.142 { 00:19:43.142 "name": "BaseBdev3", 00:19:43.142 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:43.142 "is_configured": true, 00:19:43.142 "data_offset": 0, 00:19:43.142 "data_size": 65536 00:19:43.142 } 00:19:43.142 ] 00:19:43.142 }' 00:19:43.142 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.142 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.142 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.400 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.400 14:19:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.337 14:19:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.337 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.337 "name": "raid_bdev1", 00:19:44.337 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:44.337 "strip_size_kb": 64, 00:19:44.337 "state": "online", 00:19:44.337 "raid_level": "raid5f", 00:19:44.337 "superblock": false, 00:19:44.337 "num_base_bdevs": 3, 00:19:44.337 "num_base_bdevs_discovered": 3, 00:19:44.337 "num_base_bdevs_operational": 3, 00:19:44.337 "process": { 00:19:44.337 "type": "rebuild", 00:19:44.337 "target": "spare", 00:19:44.337 "progress": { 00:19:44.337 "blocks": 92160, 00:19:44.337 "percent": 70 00:19:44.337 } 00:19:44.337 }, 00:19:44.337 "base_bdevs_list": [ 00:19:44.337 { 00:19:44.337 "name": "spare", 00:19:44.337 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:44.338 "is_configured": true, 00:19:44.338 "data_offset": 0, 00:19:44.338 "data_size": 65536 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "name": "BaseBdev2", 00:19:44.338 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:44.338 "is_configured": true, 00:19:44.338 "data_offset": 0, 00:19:44.338 "data_size": 65536 00:19:44.338 }, 00:19:44.338 { 00:19:44.338 "name": "BaseBdev3", 00:19:44.338 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:44.338 "is_configured": true, 00:19:44.338 "data_offset": 0, 00:19:44.338 "data_size": 65536 00:19:44.338 } 00:19:44.338 ] 00:19:44.338 }' 00:19:44.338 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.338 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:44.338 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.338 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.338 14:19:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.719 "name": "raid_bdev1", 00:19:45.719 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:45.719 "strip_size_kb": 64, 00:19:45.719 "state": "online", 00:19:45.719 "raid_level": "raid5f", 00:19:45.719 "superblock": false, 00:19:45.719 "num_base_bdevs": 3, 00:19:45.719 "num_base_bdevs_discovered": 3, 00:19:45.719 
"num_base_bdevs_operational": 3, 00:19:45.719 "process": { 00:19:45.719 "type": "rebuild", 00:19:45.719 "target": "spare", 00:19:45.719 "progress": { 00:19:45.719 "blocks": 116736, 00:19:45.719 "percent": 89 00:19:45.719 } 00:19:45.719 }, 00:19:45.719 "base_bdevs_list": [ 00:19:45.719 { 00:19:45.719 "name": "spare", 00:19:45.719 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:45.719 "is_configured": true, 00:19:45.719 "data_offset": 0, 00:19:45.719 "data_size": 65536 00:19:45.719 }, 00:19:45.719 { 00:19:45.719 "name": "BaseBdev2", 00:19:45.719 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:45.719 "is_configured": true, 00:19:45.719 "data_offset": 0, 00:19:45.719 "data_size": 65536 00:19:45.719 }, 00:19:45.719 { 00:19:45.719 "name": "BaseBdev3", 00:19:45.719 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:45.719 "is_configured": true, 00:19:45.719 "data_offset": 0, 00:19:45.719 "data_size": 65536 00:19:45.719 } 00:19:45.719 ] 00:19:45.719 }' 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.719 14:19:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.289 [2024-11-27 14:19:16.994089] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.289 [2024-11-27 14:19:16.994211] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.289 [2024-11-27 14:19:16.994266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.549 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:19:46.549 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.549 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.549 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.549 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.550 "name": "raid_bdev1", 00:19:46.550 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:46.550 "strip_size_kb": 64, 00:19:46.550 "state": "online", 00:19:46.550 "raid_level": "raid5f", 00:19:46.550 "superblock": false, 00:19:46.550 "num_base_bdevs": 3, 00:19:46.550 "num_base_bdevs_discovered": 3, 00:19:46.550 "num_base_bdevs_operational": 3, 00:19:46.550 "base_bdevs_list": [ 00:19:46.550 { 00:19:46.550 "name": "spare", 00:19:46.550 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:46.550 "is_configured": true, 00:19:46.550 "data_offset": 0, 00:19:46.550 "data_size": 65536 00:19:46.550 }, 00:19:46.550 { 00:19:46.550 "name": "BaseBdev2", 00:19:46.550 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:46.550 "is_configured": true, 00:19:46.550 
"data_offset": 0, 00:19:46.550 "data_size": 65536 00:19:46.550 }, 00:19:46.550 { 00:19:46.550 "name": "BaseBdev3", 00:19:46.550 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:46.550 "is_configured": true, 00:19:46.550 "data_offset": 0, 00:19:46.550 "data_size": 65536 00:19:46.550 } 00:19:46.550 ] 00:19:46.550 }' 00:19:46.550 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.810 14:19:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.810 "name": "raid_bdev1", 00:19:46.810 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:46.810 "strip_size_kb": 64, 00:19:46.810 "state": "online", 00:19:46.810 "raid_level": "raid5f", 00:19:46.810 "superblock": false, 00:19:46.810 "num_base_bdevs": 3, 00:19:46.810 "num_base_bdevs_discovered": 3, 00:19:46.810 "num_base_bdevs_operational": 3, 00:19:46.810 "base_bdevs_list": [ 00:19:46.810 { 00:19:46.810 "name": "spare", 00:19:46.810 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 }, 00:19:46.810 { 00:19:46.810 "name": "BaseBdev2", 00:19:46.810 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 }, 00:19:46.810 { 00:19:46.810 "name": "BaseBdev3", 00:19:46.810 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 } 00:19:46.810 ] 00:19:46.810 }' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:46.810 14:19:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:46.810 "name": "raid_bdev1", 00:19:46.810 "uuid": "10aae7a4-871d-4ac0-9d75-f02513579579", 00:19:46.810 "strip_size_kb": 64, 00:19:46.810 "state": "online", 00:19:46.810 "raid_level": "raid5f", 00:19:46.810 "superblock": false, 00:19:46.810 "num_base_bdevs": 3, 00:19:46.810 "num_base_bdevs_discovered": 3, 00:19:46.810 "num_base_bdevs_operational": 3, 00:19:46.810 "base_bdevs_list": [ 00:19:46.810 { 00:19:46.810 "name": "spare", 00:19:46.810 "uuid": "e47672a2-591e-5b30-8666-e6639d428abd", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 }, 00:19:46.810 { 00:19:46.810 
"name": "BaseBdev2", 00:19:46.810 "uuid": "b7e33039-bb16-5000-ad09-72b6f2be2958", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 }, 00:19:46.810 { 00:19:46.810 "name": "BaseBdev3", 00:19:46.810 "uuid": "7cc531a7-aeb9-590b-a49b-de757e01fc55", 00:19:46.810 "is_configured": true, 00:19:46.810 "data_offset": 0, 00:19:46.810 "data_size": 65536 00:19:46.810 } 00:19:46.810 ] 00:19:46.810 }' 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:46.810 14:19:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.380 [2024-11-27 14:19:18.139819] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:47.380 [2024-11-27 14:19:18.139945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:47.380 [2024-11-27 14:19:18.140080] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:47.380 [2024-11-27 14:19:18.140243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:47.380 [2024-11-27 14:19:18.140313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.380 14:19:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:47.380 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:47.640 /dev/nbd0 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:47.640 14:19:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.640 1+0 records in 00:19:47.640 1+0 records out 00:19:47.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363257 s, 11.3 MB/s 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:47.640 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:47.901 /dev/nbd1 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:47.901 1+0 records in 00:19:47.901 1+0 records out 00:19:47.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440961 s, 9.3 MB/s 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:47.901 14:19:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:47.901 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.160 14:19:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:48.421 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:48.422 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81800 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81800 ']' 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81800 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81800 00:19:48.684 killing process with pid 81800 00:19:48.684 Received shutdown signal, test time was about 60.000000 seconds 00:19:48.684 00:19:48.684 Latency(us) 00:19:48.684 
[2024-11-27T14:19:19.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.684 [2024-11-27T14:19:19.640Z] =================================================================================================================== 00:19:48.684 [2024-11-27T14:19:19.640Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81800' 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81800 00:19:48.684 [2024-11-27 14:19:19.498662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:48.684 14:19:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81800 00:19:49.260 [2024-11-27 14:19:19.914247] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:50.200 14:19:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:50.200 00:19:50.200 real 0m15.677s 00:19:50.200 user 0m19.249s 00:19:50.200 sys 0m2.130s 00:19:50.200 14:19:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.200 14:19:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.200 ************************************ 00:19:50.200 END TEST raid5f_rebuild_test 00:19:50.200 ************************************ 00:19:50.461 14:19:21 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:50.461 14:19:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:50.461 14:19:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.461 14:19:21 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.461 ************************************ 00:19:50.461 START TEST raid5f_rebuild_test_sb 00:19:50.461 ************************************ 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82246 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82246 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82246 ']' 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.461 14:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.461 [2024-11-27 14:19:21.292793] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:19:50.461 [2024-11-27 14:19:21.293018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82246 ] 00:19:50.461 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:50.461 Zero copy mechanism will not be used. 
00:19:50.721 [2024-11-27 14:19:21.469695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.721 [2024-11-27 14:19:21.597105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.981 [2024-11-27 14:19:21.823022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:50.981 [2024-11-27 14:19:21.823065] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.241 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 BaseBdev1_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 [2024-11-27 14:19:22.216779] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:51.501 [2024-11-27 14:19:22.216869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.501 [2024-11-27 14:19:22.216897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:51.501 
[2024-11-27 14:19:22.216911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.501 [2024-11-27 14:19:22.219444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.501 [2024-11-27 14:19:22.219494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:51.501 BaseBdev1 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 BaseBdev2_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 [2024-11-27 14:19:22.275426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:51.501 [2024-11-27 14:19:22.275508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.501 [2024-11-27 14:19:22.275534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:51.501 [2024-11-27 14:19:22.275546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.501 [2024-11-27 14:19:22.278023] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.501 [2024-11-27 14:19:22.278070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:51.501 BaseBdev2 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 BaseBdev3_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 [2024-11-27 14:19:22.348526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:51.501 [2024-11-27 14:19:22.348678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.501 [2024-11-27 14:19:22.348768] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:51.501 [2024-11-27 14:19:22.348821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.501 [2024-11-27 14:19:22.351384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.501 [2024-11-27 14:19:22.351478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:19:51.501 BaseBdev3 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.501 spare_malloc 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.501 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.502 spare_delay 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.502 [2024-11-27 14:19:22.421293] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:51.502 [2024-11-27 14:19:22.421456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:51.502 [2024-11-27 14:19:22.421497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:51.502 [2024-11-27 14:19:22.421548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:51.502 [2024-11-27 14:19:22.424127] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:51.502 [2024-11-27 14:19:22.424268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:51.502 spare 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.502 [2024-11-27 14:19:22.433364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:51.502 [2024-11-27 14:19:22.435449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.502 [2024-11-27 14:19:22.435520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.502 [2024-11-27 14:19:22.435719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:51.502 [2024-11-27 14:19:22.435731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:51.502 [2024-11-27 14:19:22.436066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:51.502 [2024-11-27 14:19:22.442369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:51.502 [2024-11-27 14:19:22.442462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:51.502 [2024-11-27 14:19:22.442749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.502 14:19:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.502 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.762 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.762 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.762 "name": "raid_bdev1", 00:19:51.762 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:51.762 "strip_size_kb": 64, 00:19:51.762 "state": "online", 00:19:51.762 "raid_level": "raid5f", 00:19:51.762 "superblock": true, 
00:19:51.762 "num_base_bdevs": 3, 00:19:51.762 "num_base_bdevs_discovered": 3, 00:19:51.762 "num_base_bdevs_operational": 3, 00:19:51.762 "base_bdevs_list": [ 00:19:51.762 { 00:19:51.762 "name": "BaseBdev1", 00:19:51.762 "uuid": "7cfb1f5d-4ab9-5e4c-80c9-b74ecb54ea4d", 00:19:51.762 "is_configured": true, 00:19:51.762 "data_offset": 2048, 00:19:51.762 "data_size": 63488 00:19:51.762 }, 00:19:51.762 { 00:19:51.762 "name": "BaseBdev2", 00:19:51.762 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:51.762 "is_configured": true, 00:19:51.762 "data_offset": 2048, 00:19:51.762 "data_size": 63488 00:19:51.762 }, 00:19:51.762 { 00:19:51.762 "name": "BaseBdev3", 00:19:51.762 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:51.762 "is_configured": true, 00:19:51.762 "data_offset": 2048, 00:19:51.762 "data_size": 63488 00:19:51.762 } 00:19:51.762 ] 00:19:51.762 }' 00:19:51.762 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.762 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 [2024-11-27 14:19:22.945376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.025 14:19:22 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.025 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:52.291 14:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:52.291 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:19:52.291 [2024-11-27 14:19:23.232696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:52.550 /dev/nbd0 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:52.550 1+0 records in 00:19:52.550 1+0 records out 00:19:52.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504954 s, 8.1 MB/s 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:52.550 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:52.809 496+0 records in 00:19:52.809 496+0 records out 00:19:52.809 65011712 bytes (65 MB, 62 MiB) copied, 0.40928 s, 159 MB/s 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:52.809 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:53.069 [2024-11-27 14:19:23.969487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.069 [2024-11-27 14:19:23.985855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:53.069 14:19:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.069 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.070 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.070 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.070 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.070 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.070 14:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.070 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.330 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.330 "name": "raid_bdev1", 00:19:53.330 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:53.330 "strip_size_kb": 64, 00:19:53.330 "state": "online", 00:19:53.330 "raid_level": "raid5f", 00:19:53.330 "superblock": true, 00:19:53.330 "num_base_bdevs": 3, 00:19:53.330 "num_base_bdevs_discovered": 2, 00:19:53.330 "num_base_bdevs_operational": 2, 00:19:53.330 "base_bdevs_list": [ 00:19:53.330 { 00:19:53.330 "name": null, 00:19:53.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.330 "is_configured": false, 00:19:53.330 "data_offset": 0, 00:19:53.330 "data_size": 63488 00:19:53.330 }, 00:19:53.330 { 00:19:53.330 "name": "BaseBdev2", 00:19:53.330 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:53.330 "is_configured": true, 00:19:53.330 "data_offset": 2048, 00:19:53.330 "data_size": 63488 00:19:53.330 }, 00:19:53.330 { 00:19:53.330 "name": "BaseBdev3", 00:19:53.330 "uuid": 
"9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:53.330 "is_configured": true, 00:19:53.330 "data_offset": 2048, 00:19:53.330 "data_size": 63488 00:19:53.330 } 00:19:53.330 ] 00:19:53.330 }' 00:19:53.330 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.330 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.590 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.590 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.590 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.590 [2024-11-27 14:19:24.457152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.590 [2024-11-27 14:19:24.476484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:53.590 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.590 14:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:53.590 [2024-11-27 14:19:24.486732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.971 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.971 "name": "raid_bdev1", 00:19:54.971 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:54.971 "strip_size_kb": 64, 00:19:54.971 "state": "online", 00:19:54.972 "raid_level": "raid5f", 00:19:54.972 "superblock": true, 00:19:54.972 "num_base_bdevs": 3, 00:19:54.972 "num_base_bdevs_discovered": 3, 00:19:54.972 "num_base_bdevs_operational": 3, 00:19:54.972 "process": { 00:19:54.972 "type": "rebuild", 00:19:54.972 "target": "spare", 00:19:54.972 "progress": { 00:19:54.972 "blocks": 20480, 00:19:54.972 "percent": 16 00:19:54.972 } 00:19:54.972 }, 00:19:54.972 "base_bdevs_list": [ 00:19:54.972 { 00:19:54.972 "name": "spare", 00:19:54.972 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:19:54.972 "is_configured": true, 00:19:54.972 "data_offset": 2048, 00:19:54.972 "data_size": 63488 00:19:54.972 }, 00:19:54.972 { 00:19:54.972 "name": "BaseBdev2", 00:19:54.972 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:54.972 "is_configured": true, 00:19:54.972 "data_offset": 2048, 00:19:54.972 "data_size": 63488 00:19:54.972 }, 00:19:54.972 { 00:19:54.972 "name": "BaseBdev3", 00:19:54.972 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:54.972 "is_configured": true, 00:19:54.972 "data_offset": 2048, 00:19:54.972 "data_size": 63488 00:19:54.972 } 00:19:54.972 ] 00:19:54.972 }' 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.972 14:19:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.972 [2024-11-27 14:19:25.635622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.972 [2024-11-27 14:19:25.699783] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.972 [2024-11-27 14:19:25.699995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.972 [2024-11-27 14:19:25.700050] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.972 [2024-11-27 14:19:25.700086] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.972 14:19:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.972 "name": "raid_bdev1", 00:19:54.972 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:54.972 "strip_size_kb": 64, 00:19:54.972 "state": "online", 00:19:54.972 "raid_level": "raid5f", 00:19:54.972 "superblock": true, 00:19:54.972 "num_base_bdevs": 3, 00:19:54.972 "num_base_bdevs_discovered": 2, 00:19:54.972 "num_base_bdevs_operational": 2, 00:19:54.972 "base_bdevs_list": [ 00:19:54.972 { 00:19:54.972 "name": null, 00:19:54.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.972 "is_configured": false, 00:19:54.972 "data_offset": 0, 00:19:54.972 "data_size": 63488 00:19:54.972 }, 00:19:54.972 { 00:19:54.972 "name": "BaseBdev2", 00:19:54.972 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:54.972 "is_configured": true, 00:19:54.972 "data_offset": 2048, 00:19:54.972 "data_size": 
63488 00:19:54.972 }, 00:19:54.972 { 00:19:54.972 "name": "BaseBdev3", 00:19:54.972 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:54.972 "is_configured": true, 00:19:54.972 "data_offset": 2048, 00:19:54.972 "data_size": 63488 00:19:54.972 } 00:19:54.972 ] 00:19:54.972 }' 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.972 14:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.232 "name": "raid_bdev1", 00:19:55.232 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:55.232 "strip_size_kb": 64, 00:19:55.232 "state": "online", 00:19:55.232 "raid_level": "raid5f", 00:19:55.232 "superblock": true, 00:19:55.232 "num_base_bdevs": 3, 00:19:55.232 
"num_base_bdevs_discovered": 2, 00:19:55.232 "num_base_bdevs_operational": 2, 00:19:55.232 "base_bdevs_list": [ 00:19:55.232 { 00:19:55.232 "name": null, 00:19:55.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.232 "is_configured": false, 00:19:55.232 "data_offset": 0, 00:19:55.232 "data_size": 63488 00:19:55.232 }, 00:19:55.232 { 00:19:55.232 "name": "BaseBdev2", 00:19:55.232 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:55.232 "is_configured": true, 00:19:55.232 "data_offset": 2048, 00:19:55.232 "data_size": 63488 00:19:55.232 }, 00:19:55.232 { 00:19:55.232 "name": "BaseBdev3", 00:19:55.232 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:55.232 "is_configured": true, 00:19:55.232 "data_offset": 2048, 00:19:55.232 "data_size": 63488 00:19:55.232 } 00:19:55.232 ] 00:19:55.232 }' 00:19:55.232 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.492 [2024-11-27 14:19:26.286712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:55.492 [2024-11-27 14:19:26.305706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:55.492 14:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.492 14:19:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:55.492 [2024-11-27 14:19:26.314811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.439 "name": "raid_bdev1", 00:19:56.439 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:56.439 "strip_size_kb": 64, 00:19:56.439 "state": "online", 00:19:56.439 "raid_level": "raid5f", 00:19:56.439 "superblock": true, 00:19:56.439 "num_base_bdevs": 3, 00:19:56.439 "num_base_bdevs_discovered": 3, 00:19:56.439 "num_base_bdevs_operational": 3, 00:19:56.439 "process": { 00:19:56.439 "type": "rebuild", 00:19:56.439 "target": "spare", 00:19:56.439 "progress": { 00:19:56.439 "blocks": 20480, 00:19:56.439 "percent": 16 00:19:56.439 } 
00:19:56.439 }, 00:19:56.439 "base_bdevs_list": [ 00:19:56.439 { 00:19:56.439 "name": "spare", 00:19:56.439 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:19:56.439 "is_configured": true, 00:19:56.439 "data_offset": 2048, 00:19:56.439 "data_size": 63488 00:19:56.439 }, 00:19:56.439 { 00:19:56.439 "name": "BaseBdev2", 00:19:56.439 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:56.439 "is_configured": true, 00:19:56.439 "data_offset": 2048, 00:19:56.439 "data_size": 63488 00:19:56.439 }, 00:19:56.439 { 00:19:56.439 "name": "BaseBdev3", 00:19:56.439 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:56.439 "is_configured": true, 00:19:56.439 "data_offset": 2048, 00:19:56.439 "data_size": 63488 00:19:56.439 } 00:19:56.439 ] 00:19:56.439 }' 00:19:56.439 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:56.716 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=575 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:56.716 14:19:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.716 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.716 "name": "raid_bdev1", 00:19:56.716 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:56.716 "strip_size_kb": 64, 00:19:56.716 "state": "online", 00:19:56.716 "raid_level": "raid5f", 00:19:56.716 "superblock": true, 00:19:56.716 "num_base_bdevs": 3, 00:19:56.716 "num_base_bdevs_discovered": 3, 00:19:56.716 "num_base_bdevs_operational": 3, 00:19:56.716 "process": { 00:19:56.716 "type": "rebuild", 00:19:56.716 "target": "spare", 00:19:56.716 "progress": { 00:19:56.716 "blocks": 22528, 00:19:56.716 "percent": 17 00:19:56.716 } 00:19:56.716 }, 00:19:56.716 "base_bdevs_list": [ 00:19:56.716 { 00:19:56.716 "name": "spare", 00:19:56.716 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:19:56.716 "is_configured": true, 00:19:56.716 "data_offset": 2048, 00:19:56.716 
"data_size": 63488 00:19:56.716 }, 00:19:56.716 { 00:19:56.716 "name": "BaseBdev2", 00:19:56.716 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:56.716 "is_configured": true, 00:19:56.716 "data_offset": 2048, 00:19:56.716 "data_size": 63488 00:19:56.716 }, 00:19:56.716 { 00:19:56.716 "name": "BaseBdev3", 00:19:56.716 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:56.717 "is_configured": true, 00:19:56.717 "data_offset": 2048, 00:19:56.717 "data_size": 63488 00:19:56.717 } 00:19:56.717 ] 00:19:56.717 }' 00:19:56.717 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.717 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.717 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.717 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.717 14:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.098 
14:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.098 "name": "raid_bdev1", 00:19:58.098 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:58.098 "strip_size_kb": 64, 00:19:58.098 "state": "online", 00:19:58.098 "raid_level": "raid5f", 00:19:58.098 "superblock": true, 00:19:58.098 "num_base_bdevs": 3, 00:19:58.098 "num_base_bdevs_discovered": 3, 00:19:58.098 "num_base_bdevs_operational": 3, 00:19:58.098 "process": { 00:19:58.098 "type": "rebuild", 00:19:58.098 "target": "spare", 00:19:58.098 "progress": { 00:19:58.098 "blocks": 45056, 00:19:58.098 "percent": 35 00:19:58.098 } 00:19:58.098 }, 00:19:58.098 "base_bdevs_list": [ 00:19:58.098 { 00:19:58.098 "name": "spare", 00:19:58.098 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:19:58.098 "is_configured": true, 00:19:58.098 "data_offset": 2048, 00:19:58.098 "data_size": 63488 00:19:58.098 }, 00:19:58.098 { 00:19:58.098 "name": "BaseBdev2", 00:19:58.098 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:58.098 "is_configured": true, 00:19:58.098 "data_offset": 2048, 00:19:58.098 "data_size": 63488 00:19:58.098 }, 00:19:58.098 { 00:19:58.098 "name": "BaseBdev3", 00:19:58.098 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:58.098 "is_configured": true, 00:19:58.098 "data_offset": 2048, 00:19:58.098 "data_size": 63488 00:19:58.098 } 00:19:58.098 ] 00:19:58.098 }' 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:58.098 14:19:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.098 14:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.037 "name": "raid_bdev1", 00:19:59.037 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:19:59.037 "strip_size_kb": 64, 00:19:59.037 "state": "online", 00:19:59.037 "raid_level": "raid5f", 00:19:59.037 "superblock": true, 00:19:59.037 "num_base_bdevs": 3, 00:19:59.037 "num_base_bdevs_discovered": 3, 00:19:59.037 "num_base_bdevs_operational": 
3, 00:19:59.037 "process": { 00:19:59.037 "type": "rebuild", 00:19:59.037 "target": "spare", 00:19:59.037 "progress": { 00:19:59.037 "blocks": 69632, 00:19:59.037 "percent": 54 00:19:59.037 } 00:19:59.037 }, 00:19:59.037 "base_bdevs_list": [ 00:19:59.037 { 00:19:59.037 "name": "spare", 00:19:59.037 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:19:59.037 "is_configured": true, 00:19:59.037 "data_offset": 2048, 00:19:59.037 "data_size": 63488 00:19:59.037 }, 00:19:59.037 { 00:19:59.037 "name": "BaseBdev2", 00:19:59.037 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:19:59.037 "is_configured": true, 00:19:59.037 "data_offset": 2048, 00:19:59.037 "data_size": 63488 00:19:59.037 }, 00:19:59.037 { 00:19:59.037 "name": "BaseBdev3", 00:19:59.037 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:19:59.037 "is_configured": true, 00:19:59.037 "data_offset": 2048, 00:19:59.037 "data_size": 63488 00:19:59.037 } 00:19:59.037 ] 00:19:59.037 }' 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.037 14:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:59.977 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.977 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.977 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.978 
14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.978 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.238 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.238 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.238 "name": "raid_bdev1", 00:20:00.238 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:00.238 "strip_size_kb": 64, 00:20:00.238 "state": "online", 00:20:00.238 "raid_level": "raid5f", 00:20:00.238 "superblock": true, 00:20:00.238 "num_base_bdevs": 3, 00:20:00.238 "num_base_bdevs_discovered": 3, 00:20:00.238 "num_base_bdevs_operational": 3, 00:20:00.238 "process": { 00:20:00.238 "type": "rebuild", 00:20:00.238 "target": "spare", 00:20:00.238 "progress": { 00:20:00.238 "blocks": 92160, 00:20:00.238 "percent": 72 00:20:00.238 } 00:20:00.238 }, 00:20:00.238 "base_bdevs_list": [ 00:20:00.238 { 00:20:00.238 "name": "spare", 00:20:00.238 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:00.238 "is_configured": true, 00:20:00.238 "data_offset": 2048, 00:20:00.238 "data_size": 63488 00:20:00.238 }, 00:20:00.238 { 00:20:00.238 "name": "BaseBdev2", 00:20:00.238 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:00.238 "is_configured": true, 00:20:00.238 "data_offset": 2048, 00:20:00.238 "data_size": 63488 00:20:00.238 }, 00:20:00.238 { 00:20:00.238 "name": "BaseBdev3", 00:20:00.238 "uuid": 
"9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:00.238 "is_configured": true, 00:20:00.238 "data_offset": 2048, 00:20:00.238 "data_size": 63488 00:20:00.238 } 00:20:00.238 ] 00:20:00.238 }' 00:20:00.238 14:19:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.238 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.238 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.238 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.238 14:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.177 
14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:01.177 "name": "raid_bdev1", 00:20:01.177 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:01.177 "strip_size_kb": 64, 00:20:01.177 "state": "online", 00:20:01.177 "raid_level": "raid5f", 00:20:01.177 "superblock": true, 00:20:01.177 "num_base_bdevs": 3, 00:20:01.177 "num_base_bdevs_discovered": 3, 00:20:01.177 "num_base_bdevs_operational": 3, 00:20:01.177 "process": { 00:20:01.177 "type": "rebuild", 00:20:01.177 "target": "spare", 00:20:01.177 "progress": { 00:20:01.177 "blocks": 116736, 00:20:01.177 "percent": 91 00:20:01.177 } 00:20:01.177 }, 00:20:01.177 "base_bdevs_list": [ 00:20:01.177 { 00:20:01.177 "name": "spare", 00:20:01.177 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:01.177 "is_configured": true, 00:20:01.177 "data_offset": 2048, 00:20:01.177 "data_size": 63488 00:20:01.177 }, 00:20:01.177 { 00:20:01.177 "name": "BaseBdev2", 00:20:01.177 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:01.177 "is_configured": true, 00:20:01.177 "data_offset": 2048, 00:20:01.177 "data_size": 63488 00:20:01.177 }, 00:20:01.177 { 00:20:01.177 "name": "BaseBdev3", 00:20:01.177 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:01.177 "is_configured": true, 00:20:01.177 "data_offset": 2048, 00:20:01.177 "data_size": 63488 00:20:01.177 } 00:20:01.177 ] 00:20:01.177 }' 00:20:01.177 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:01.437 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:01.437 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:01.437 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:01.437 14:19:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:01.697 [2024-11-27 14:19:32.578998] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:01.697 [2024-11-27 14:19:32.579270] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:01.697 [2024-11-27 14:19:32.579453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.273 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.536 "name": "raid_bdev1", 00:20:02.536 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:02.536 "strip_size_kb": 64, 00:20:02.536 "state": "online", 00:20:02.536 "raid_level": "raid5f", 00:20:02.536 "superblock": true, 00:20:02.536 "num_base_bdevs": 3, 00:20:02.536 "num_base_bdevs_discovered": 3, 
00:20:02.536 "num_base_bdevs_operational": 3, 00:20:02.536 "base_bdevs_list": [ 00:20:02.536 { 00:20:02.536 "name": "spare", 00:20:02.536 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 }, 00:20:02.536 { 00:20:02.536 "name": "BaseBdev2", 00:20:02.536 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 }, 00:20:02.536 { 00:20:02.536 "name": "BaseBdev3", 00:20:02.536 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 } 00:20:02.536 ] 00:20:02.536 }' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.536 "name": "raid_bdev1", 00:20:02.536 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:02.536 "strip_size_kb": 64, 00:20:02.536 "state": "online", 00:20:02.536 "raid_level": "raid5f", 00:20:02.536 "superblock": true, 00:20:02.536 "num_base_bdevs": 3, 00:20:02.536 "num_base_bdevs_discovered": 3, 00:20:02.536 "num_base_bdevs_operational": 3, 00:20:02.536 "base_bdevs_list": [ 00:20:02.536 { 00:20:02.536 "name": "spare", 00:20:02.536 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 }, 00:20:02.536 { 00:20:02.536 "name": "BaseBdev2", 00:20:02.536 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 }, 00:20:02.536 { 00:20:02.536 "name": "BaseBdev3", 00:20:02.536 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:02.536 "is_configured": true, 00:20:02.536 "data_offset": 2048, 00:20:02.536 "data_size": 63488 00:20:02.536 } 00:20:02.536 ] 00:20:02.536 }' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.536 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.796 "name": "raid_bdev1", 00:20:02.796 "uuid": 
"934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:02.796 "strip_size_kb": 64, 00:20:02.796 "state": "online", 00:20:02.796 "raid_level": "raid5f", 00:20:02.796 "superblock": true, 00:20:02.796 "num_base_bdevs": 3, 00:20:02.796 "num_base_bdevs_discovered": 3, 00:20:02.796 "num_base_bdevs_operational": 3, 00:20:02.796 "base_bdevs_list": [ 00:20:02.796 { 00:20:02.796 "name": "spare", 00:20:02.796 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:02.796 "is_configured": true, 00:20:02.796 "data_offset": 2048, 00:20:02.796 "data_size": 63488 00:20:02.796 }, 00:20:02.796 { 00:20:02.796 "name": "BaseBdev2", 00:20:02.796 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:02.796 "is_configured": true, 00:20:02.796 "data_offset": 2048, 00:20:02.796 "data_size": 63488 00:20:02.796 }, 00:20:02.796 { 00:20:02.796 "name": "BaseBdev3", 00:20:02.796 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:02.796 "is_configured": true, 00:20:02.796 "data_offset": 2048, 00:20:02.796 "data_size": 63488 00:20:02.796 } 00:20:02.796 ] 00:20:02.796 }' 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.796 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.056 [2024-11-27 14:19:33.979278] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:03.056 [2024-11-27 14:19:33.979320] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.056 [2024-11-27 14:19:33.979430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.056 [2024-11-27 14:19:33.979529] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.056 [2024-11-27 14:19:33.979548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:03.056 14:19:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.317 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:03.317 /dev/nbd0 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.576 1+0 records in 00:20:03.576 1+0 records out 00:20:03.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462617 s, 8.9 MB/s 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.576 14:19:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.576 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:03.834 /dev/nbd1 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.834 1+0 records in 00:20:03.834 1+0 records out 00:20:03.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332063 s, 12.3 MB/s 00:20:03.834 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:03.835 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.112 14:19:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.372 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.632 [2024-11-27 14:19:35.413292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:04.632 [2024-11-27 14:19:35.413476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.632 [2024-11-27 14:19:35.413545] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:04.632 [2024-11-27 14:19:35.413594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.632 [2024-11-27 14:19:35.416687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.632 [2024-11-27 14:19:35.416841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:04.632 [2024-11-27 14:19:35.416990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:04.632 [2024-11-27 14:19:35.417077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:04.632 [2024-11-27 14:19:35.417294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.632 [2024-11-27 14:19:35.417523] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.632 spare 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.632 [2024-11-27 14:19:35.517476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:04.632 [2024-11-27 14:19:35.517547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:04.632 [2024-11-27 14:19:35.517980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:20:04.632 [2024-11-27 14:19:35.525262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:04.632 [2024-11-27 14:19:35.525414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:04.632 [2024-11-27 14:19:35.525756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.632 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.633 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.892 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.892 "name": "raid_bdev1", 00:20:04.892 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:04.892 "strip_size_kb": 64, 00:20:04.892 "state": "online", 00:20:04.892 "raid_level": "raid5f", 00:20:04.892 "superblock": true, 00:20:04.892 "num_base_bdevs": 3, 00:20:04.892 "num_base_bdevs_discovered": 3, 00:20:04.892 "num_base_bdevs_operational": 3, 00:20:04.892 "base_bdevs_list": [ 00:20:04.892 { 00:20:04.892 "name": "spare", 00:20:04.892 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:04.892 "is_configured": true, 00:20:04.892 "data_offset": 2048, 00:20:04.892 "data_size": 63488 00:20:04.892 }, 00:20:04.892 { 00:20:04.892 "name": "BaseBdev2", 00:20:04.892 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:04.892 "is_configured": true, 00:20:04.892 "data_offset": 
2048, 00:20:04.892 "data_size": 63488 00:20:04.892 }, 00:20:04.892 { 00:20:04.892 "name": "BaseBdev3", 00:20:04.892 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:04.892 "is_configured": true, 00:20:04.892 "data_offset": 2048, 00:20:04.892 "data_size": 63488 00:20:04.892 } 00:20:04.892 ] 00:20:04.892 }' 00:20:04.892 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.892 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.152 14:19:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.152 "name": "raid_bdev1", 00:20:05.152 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:05.152 "strip_size_kb": 64, 00:20:05.152 "state": "online", 00:20:05.152 "raid_level": "raid5f", 00:20:05.152 "superblock": true, 00:20:05.152 
"num_base_bdevs": 3, 00:20:05.152 "num_base_bdevs_discovered": 3, 00:20:05.152 "num_base_bdevs_operational": 3, 00:20:05.152 "base_bdevs_list": [ 00:20:05.152 { 00:20:05.152 "name": "spare", 00:20:05.152 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:05.152 "is_configured": true, 00:20:05.152 "data_offset": 2048, 00:20:05.152 "data_size": 63488 00:20:05.152 }, 00:20:05.152 { 00:20:05.152 "name": "BaseBdev2", 00:20:05.152 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:05.152 "is_configured": true, 00:20:05.152 "data_offset": 2048, 00:20:05.152 "data_size": 63488 00:20:05.152 }, 00:20:05.152 { 00:20:05.152 "name": "BaseBdev3", 00:20:05.152 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:05.152 "is_configured": true, 00:20:05.152 "data_offset": 2048, 00:20:05.152 "data_size": 63488 00:20:05.152 } 00:20:05.152 ] 00:20:05.152 }' 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.152 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.412 14:19:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.412 [2024-11-27 14:19:36.157874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.412 14:19:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.412 "name": "raid_bdev1", 00:20:05.412 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:05.412 "strip_size_kb": 64, 00:20:05.412 "state": "online", 00:20:05.412 "raid_level": "raid5f", 00:20:05.412 "superblock": true, 00:20:05.412 "num_base_bdevs": 3, 00:20:05.412 "num_base_bdevs_discovered": 2, 00:20:05.412 "num_base_bdevs_operational": 2, 00:20:05.412 "base_bdevs_list": [ 00:20:05.412 { 00:20:05.412 "name": null, 00:20:05.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.412 "is_configured": false, 00:20:05.412 "data_offset": 0, 00:20:05.412 "data_size": 63488 00:20:05.412 }, 00:20:05.412 { 00:20:05.412 "name": "BaseBdev2", 00:20:05.412 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:05.412 "is_configured": true, 00:20:05.412 "data_offset": 2048, 00:20:05.412 "data_size": 63488 00:20:05.412 }, 00:20:05.412 { 00:20:05.412 "name": "BaseBdev3", 00:20:05.412 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:05.412 "is_configured": true, 00:20:05.412 "data_offset": 2048, 00:20:05.412 "data_size": 63488 00:20:05.412 } 00:20:05.412 ] 00:20:05.412 }' 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.412 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:05.672 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.672 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.672 [2024-11-27 14:19:36.601171] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.672 [2024-11-27 14:19:36.601426] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:05.672 [2024-11-27 14:19:36.601452] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:05.672 [2024-11-27 14:19:36.601505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.672 [2024-11-27 14:19:36.622302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:20:05.672 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.672 14:19:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:05.930 [2024-11-27 14:19:36.632199] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.865 "name": "raid_bdev1", 00:20:06.865 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:06.865 "strip_size_kb": 64, 00:20:06.865 "state": "online", 00:20:06.865 "raid_level": "raid5f", 00:20:06.865 "superblock": true, 00:20:06.865 "num_base_bdevs": 3, 00:20:06.865 "num_base_bdevs_discovered": 3, 00:20:06.865 "num_base_bdevs_operational": 3, 00:20:06.865 "process": { 00:20:06.865 "type": "rebuild", 00:20:06.865 "target": "spare", 00:20:06.865 "progress": { 00:20:06.865 "blocks": 18432, 00:20:06.865 "percent": 14 00:20:06.865 } 00:20:06.865 }, 00:20:06.865 "base_bdevs_list": [ 00:20:06.865 { 00:20:06.865 "name": "spare", 00:20:06.865 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:06.865 "is_configured": true, 00:20:06.865 "data_offset": 2048, 00:20:06.865 "data_size": 63488 00:20:06.865 }, 00:20:06.865 { 00:20:06.865 "name": "BaseBdev2", 00:20:06.865 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:06.865 "is_configured": true, 00:20:06.865 "data_offset": 2048, 00:20:06.865 "data_size": 63488 00:20:06.865 }, 00:20:06.865 { 00:20:06.865 "name": "BaseBdev3", 00:20:06.865 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:06.865 "is_configured": true, 00:20:06.865 "data_offset": 2048, 00:20:06.865 "data_size": 63488 00:20:06.865 } 00:20:06.865 ] 00:20:06.865 }' 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.865 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.865 [2024-11-27 14:19:37.780906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.125 [2024-11-27 14:19:37.846243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:07.125 [2024-11-27 14:19:37.846365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.125 [2024-11-27 14:19:37.846390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:07.126 [2024-11-27 14:19:37.846404] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.126 14:19:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.126 "name": "raid_bdev1", 00:20:07.126 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:07.126 "strip_size_kb": 64, 00:20:07.126 "state": "online", 00:20:07.126 "raid_level": "raid5f", 00:20:07.126 "superblock": true, 00:20:07.126 "num_base_bdevs": 3, 00:20:07.126 "num_base_bdevs_discovered": 2, 00:20:07.126 "num_base_bdevs_operational": 2, 00:20:07.126 "base_bdevs_list": [ 00:20:07.126 { 00:20:07.126 "name": null, 00:20:07.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.126 "is_configured": false, 00:20:07.126 "data_offset": 0, 00:20:07.126 "data_size": 63488 00:20:07.126 }, 00:20:07.126 { 00:20:07.126 "name": "BaseBdev2", 00:20:07.126 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:07.126 "is_configured": true, 00:20:07.126 "data_offset": 2048, 00:20:07.126 "data_size": 63488 00:20:07.126 }, 00:20:07.126 { 00:20:07.126 "name": "BaseBdev3", 00:20:07.126 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:07.126 "is_configured": true, 00:20:07.126 "data_offset": 2048, 00:20:07.126 "data_size": 63488 00:20:07.126 } 00:20:07.126 ] 00:20:07.126 }' 00:20:07.126 14:19:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.126 14:19:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.692 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:07.692 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.692 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.692 [2024-11-27 14:19:38.365064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:07.692 [2024-11-27 14:19:38.365193] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.692 [2024-11-27 14:19:38.365222] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:07.692 [2024-11-27 14:19:38.365239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.692 [2024-11-27 14:19:38.365857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.692 [2024-11-27 14:19:38.365896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:07.692 [2024-11-27 14:19:38.366024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:07.692 [2024-11-27 14:19:38.366046] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:07.692 [2024-11-27 14:19:38.366060] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:07.692 [2024-11-27 14:19:38.366089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:07.692 spare 00:20:07.692 [2024-11-27 14:19:38.386541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:07.692 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.692 14:19:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:07.692 [2024-11-27 14:19:38.396202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.627 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.627 "name": "raid_bdev1", 00:20:08.627 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:08.627 "strip_size_kb": 64, 00:20:08.627 "state": 
"online", 00:20:08.627 "raid_level": "raid5f", 00:20:08.627 "superblock": true, 00:20:08.627 "num_base_bdevs": 3, 00:20:08.627 "num_base_bdevs_discovered": 3, 00:20:08.627 "num_base_bdevs_operational": 3, 00:20:08.627 "process": { 00:20:08.627 "type": "rebuild", 00:20:08.627 "target": "spare", 00:20:08.627 "progress": { 00:20:08.627 "blocks": 18432, 00:20:08.627 "percent": 14 00:20:08.627 } 00:20:08.627 }, 00:20:08.627 "base_bdevs_list": [ 00:20:08.627 { 00:20:08.627 "name": "spare", 00:20:08.627 "uuid": "f12ec863-cd08-50ff-8367-22f7ecd68f52", 00:20:08.627 "is_configured": true, 00:20:08.627 "data_offset": 2048, 00:20:08.627 "data_size": 63488 00:20:08.627 }, 00:20:08.628 { 00:20:08.628 "name": "BaseBdev2", 00:20:08.628 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:08.628 "is_configured": true, 00:20:08.628 "data_offset": 2048, 00:20:08.628 "data_size": 63488 00:20:08.628 }, 00:20:08.628 { 00:20:08.628 "name": "BaseBdev3", 00:20:08.628 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:08.628 "is_configured": true, 00:20:08.628 "data_offset": 2048, 00:20:08.628 "data_size": 63488 00:20:08.628 } 00:20:08.628 ] 00:20:08.628 }' 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.628 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.628 [2024-11-27 14:19:39.548452] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.885 [2024-11-27 14:19:39.609804] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:08.885 [2024-11-27 14:19:39.609908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.885 [2024-11-27 14:19:39.609934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:08.885 [2024-11-27 14:19:39.609944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.885 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.885 "name": "raid_bdev1", 00:20:08.885 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:08.885 "strip_size_kb": 64, 00:20:08.885 "state": "online", 00:20:08.885 "raid_level": "raid5f", 00:20:08.885 "superblock": true, 00:20:08.885 "num_base_bdevs": 3, 00:20:08.885 "num_base_bdevs_discovered": 2, 00:20:08.885 "num_base_bdevs_operational": 2, 00:20:08.885 "base_bdevs_list": [ 00:20:08.885 { 00:20:08.885 "name": null, 00:20:08.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.885 "is_configured": false, 00:20:08.885 "data_offset": 0, 00:20:08.885 "data_size": 63488 00:20:08.885 }, 00:20:08.885 { 00:20:08.885 "name": "BaseBdev2", 00:20:08.885 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:08.885 "is_configured": true, 00:20:08.885 "data_offset": 2048, 00:20:08.885 "data_size": 63488 00:20:08.885 }, 00:20:08.885 { 00:20:08.885 "name": "BaseBdev3", 00:20:08.885 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:08.885 "is_configured": true, 00:20:08.885 "data_offset": 2048, 00:20:08.885 "data_size": 63488 00:20:08.886 } 00:20:08.886 ] 00:20:08.886 }' 00:20:08.886 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.886 14:19:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.452 "name": "raid_bdev1", 00:20:09.452 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:09.452 "strip_size_kb": 64, 00:20:09.452 "state": "online", 00:20:09.452 "raid_level": "raid5f", 00:20:09.452 "superblock": true, 00:20:09.452 "num_base_bdevs": 3, 00:20:09.452 "num_base_bdevs_discovered": 2, 00:20:09.452 "num_base_bdevs_operational": 2, 00:20:09.452 "base_bdevs_list": [ 00:20:09.452 { 00:20:09.452 "name": null, 00:20:09.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.452 "is_configured": false, 00:20:09.452 "data_offset": 0, 00:20:09.452 "data_size": 63488 00:20:09.452 }, 00:20:09.452 { 00:20:09.452 "name": "BaseBdev2", 00:20:09.452 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:09.452 "is_configured": true, 00:20:09.452 "data_offset": 2048, 00:20:09.452 "data_size": 63488 00:20:09.452 }, 00:20:09.452 { 00:20:09.452 "name": "BaseBdev3", 00:20:09.452 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:09.452 "is_configured": true, 
00:20:09.452 "data_offset": 2048, 00:20:09.452 "data_size": 63488 00:20:09.452 } 00:20:09.452 ] 00:20:09.452 }' 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.452 [2024-11-27 14:19:40.280034] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:09.452 [2024-11-27 14:19:40.280221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.452 [2024-11-27 14:19:40.280261] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:09.452 [2024-11-27 14:19:40.280274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.452 [2024-11-27 14:19:40.280861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.452 [2024-11-27 
14:19:40.280903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:09.452 [2024-11-27 14:19:40.281022] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:09.452 [2024-11-27 14:19:40.281043] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:09.452 [2024-11-27 14:19:40.281069] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:09.452 [2024-11-27 14:19:40.281083] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:09.452 BaseBdev1 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.452 14:19:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.387 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.387 14:19:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.388 "name": "raid_bdev1", 00:20:10.388 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:10.388 "strip_size_kb": 64, 00:20:10.388 "state": "online", 00:20:10.388 "raid_level": "raid5f", 00:20:10.388 "superblock": true, 00:20:10.388 "num_base_bdevs": 3, 00:20:10.388 "num_base_bdevs_discovered": 2, 00:20:10.388 "num_base_bdevs_operational": 2, 00:20:10.388 "base_bdevs_list": [ 00:20:10.388 { 00:20:10.388 "name": null, 00:20:10.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.388 "is_configured": false, 00:20:10.388 "data_offset": 0, 00:20:10.388 "data_size": 63488 00:20:10.388 }, 00:20:10.388 { 00:20:10.388 "name": "BaseBdev2", 00:20:10.388 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:10.388 "is_configured": true, 00:20:10.388 "data_offset": 2048, 00:20:10.388 "data_size": 63488 00:20:10.388 }, 00:20:10.388 { 00:20:10.388 "name": "BaseBdev3", 00:20:10.388 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:10.388 "is_configured": true, 00:20:10.388 "data_offset": 2048, 00:20:10.388 "data_size": 63488 00:20:10.388 } 00:20:10.388 ] 00:20:10.388 }' 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.388 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.957 "name": "raid_bdev1", 00:20:10.957 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:10.957 "strip_size_kb": 64, 00:20:10.957 "state": "online", 00:20:10.957 "raid_level": "raid5f", 00:20:10.957 "superblock": true, 00:20:10.957 "num_base_bdevs": 3, 00:20:10.957 "num_base_bdevs_discovered": 2, 00:20:10.957 "num_base_bdevs_operational": 2, 00:20:10.957 "base_bdevs_list": [ 00:20:10.957 { 00:20:10.957 "name": null, 00:20:10.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.957 "is_configured": false, 00:20:10.957 "data_offset": 0, 00:20:10.957 "data_size": 63488 00:20:10.957 }, 00:20:10.957 { 00:20:10.957 "name": "BaseBdev2", 00:20:10.957 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 
00:20:10.957 "is_configured": true, 00:20:10.957 "data_offset": 2048, 00:20:10.957 "data_size": 63488 00:20:10.957 }, 00:20:10.957 { 00:20:10.957 "name": "BaseBdev3", 00:20:10.957 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:10.957 "is_configured": true, 00:20:10.957 "data_offset": 2048, 00:20:10.957 "data_size": 63488 00:20:10.957 } 00:20:10.957 ] 00:20:10.957 }' 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:10.957 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.957 14:19:41 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.957 [2024-11-27 14:19:41.908100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.957 [2024-11-27 14:19:41.908409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:10.957 [2024-11-27 14:19:41.908489] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:11.216 request: 00:20:11.216 { 00:20:11.216 "base_bdev": "BaseBdev1", 00:20:11.216 "raid_bdev": "raid_bdev1", 00:20:11.216 "method": "bdev_raid_add_base_bdev", 00:20:11.216 "req_id": 1 00:20:11.216 } 00:20:11.216 Got JSON-RPC error response 00:20:11.216 response: 00:20:11.216 { 00:20:11.216 "code": -22, 00:20:11.216 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:11.216 } 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.216 14:19:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:12.152 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:12.152 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.152 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.152 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.152 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.153 "name": "raid_bdev1", 00:20:12.153 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:12.153 "strip_size_kb": 64, 00:20:12.153 "state": "online", 00:20:12.153 "raid_level": "raid5f", 00:20:12.153 "superblock": true, 00:20:12.153 "num_base_bdevs": 3, 00:20:12.153 "num_base_bdevs_discovered": 2, 00:20:12.153 "num_base_bdevs_operational": 2, 00:20:12.153 "base_bdevs_list": [ 00:20:12.153 { 00:20:12.153 "name": null, 00:20:12.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.153 "is_configured": false, 00:20:12.153 "data_offset": 0, 00:20:12.153 "data_size": 63488 00:20:12.153 }, 00:20:12.153 { 00:20:12.153 
"name": "BaseBdev2", 00:20:12.153 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:12.153 "is_configured": true, 00:20:12.153 "data_offset": 2048, 00:20:12.153 "data_size": 63488 00:20:12.153 }, 00:20:12.153 { 00:20:12.153 "name": "BaseBdev3", 00:20:12.153 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:12.153 "is_configured": true, 00:20:12.153 "data_offset": 2048, 00:20:12.153 "data_size": 63488 00:20:12.153 } 00:20:12.153 ] 00:20:12.153 }' 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.153 14:19:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:12.722 "name": "raid_bdev1", 00:20:12.722 "uuid": "934d62c0-d157-4861-bcb8-f87a7c022654", 00:20:12.722 
"strip_size_kb": 64, 00:20:12.722 "state": "online", 00:20:12.722 "raid_level": "raid5f", 00:20:12.722 "superblock": true, 00:20:12.722 "num_base_bdevs": 3, 00:20:12.722 "num_base_bdevs_discovered": 2, 00:20:12.722 "num_base_bdevs_operational": 2, 00:20:12.722 "base_bdevs_list": [ 00:20:12.722 { 00:20:12.722 "name": null, 00:20:12.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.722 "is_configured": false, 00:20:12.722 "data_offset": 0, 00:20:12.722 "data_size": 63488 00:20:12.722 }, 00:20:12.722 { 00:20:12.722 "name": "BaseBdev2", 00:20:12.722 "uuid": "cd55fc04-7ec7-5e19-bb4b-c806f2df9dba", 00:20:12.722 "is_configured": true, 00:20:12.722 "data_offset": 2048, 00:20:12.722 "data_size": 63488 00:20:12.722 }, 00:20:12.722 { 00:20:12.722 "name": "BaseBdev3", 00:20:12.722 "uuid": "9bf28b6c-25f6-58f5-b085-dc00989e471c", 00:20:12.722 "is_configured": true, 00:20:12.722 "data_offset": 2048, 00:20:12.722 "data_size": 63488 00:20:12.722 } 00:20:12.722 ] 00:20:12.722 }' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82246 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82246 ']' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82246 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.722 14:19:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82246 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.722 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82246' 00:20:12.722 killing process with pid 82246 00:20:12.722 Received shutdown signal, test time was about 60.000000 seconds 00:20:12.722 00:20:12.722 Latency(us) 00:20:12.722 [2024-11-27T14:19:43.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.723 [2024-11-27T14:19:43.679Z] =================================================================================================================== 00:20:12.723 [2024-11-27T14:19:43.679Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:12.723 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82246 00:20:12.723 [2024-11-27 14:19:43.561444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.723 14:19:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82246 00:20:12.723 [2024-11-27 14:19:43.561606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.723 [2024-11-27 14:19:43.561690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.723 [2024-11-27 14:19:43.561707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:13.292 [2024-11-27 14:19:44.041715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.671 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:14.671 00:20:14.671 real 0m24.185s 00:20:14.671 user 0m31.019s 
00:20:14.671 sys 0m2.812s 00:20:14.671 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.671 ************************************ 00:20:14.671 END TEST raid5f_rebuild_test_sb 00:20:14.671 ************************************ 00:20:14.671 14:19:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.671 14:19:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:14.671 14:19:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:14.671 14:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:14.671 14:19:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.671 14:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.671 ************************************ 00:20:14.671 START TEST raid5f_state_function_test 00:20:14.671 ************************************ 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:14.671 Process raid pid: 83009 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83009 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83009' 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83009 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83009 ']' 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.671 14:19:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.671 [2024-11-27 14:19:45.543389] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:14.671 [2024-11-27 14:19:45.543634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.021 [2024-11-27 14:19:45.725344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.021 [2024-11-27 14:19:45.861045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.280 [2024-11-27 14:19:46.110645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.281 [2024-11-27 14:19:46.110700] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.538 [2024-11-27 14:19:46.477087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:15.538 [2024-11-27 14:19:46.477188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:15.538 [2024-11-27 14:19:46.477201] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:15.538 [2024-11-27 14:19:46.477213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:15.538 [2024-11-27 14:19:46.477221] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:15.538 [2024-11-27 14:19:46.477231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:15.538 [2024-11-27 14:19:46.477239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:15.538 [2024-11-27 14:19:46.477249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.538 14:19:46 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.538 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.795 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.795 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.795 "name": "Existed_Raid", 00:20:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.795 "strip_size_kb": 64, 00:20:15.795 "state": "configuring", 00:20:15.795 "raid_level": "raid5f", 00:20:15.795 "superblock": false, 00:20:15.795 "num_base_bdevs": 4, 00:20:15.795 "num_base_bdevs_discovered": 0, 00:20:15.795 "num_base_bdevs_operational": 4, 00:20:15.795 "base_bdevs_list": [ 00:20:15.795 { 00:20:15.795 "name": "BaseBdev1", 00:20:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.795 "is_configured": false, 00:20:15.795 "data_offset": 0, 00:20:15.795 "data_size": 0 00:20:15.795 }, 00:20:15.795 { 00:20:15.795 "name": "BaseBdev2", 00:20:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.795 "is_configured": false, 00:20:15.795 "data_offset": 0, 00:20:15.795 "data_size": 0 00:20:15.795 }, 00:20:15.795 { 00:20:15.795 "name": "BaseBdev3", 00:20:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.795 "is_configured": false, 00:20:15.795 "data_offset": 0, 00:20:15.795 "data_size": 0 00:20:15.795 }, 00:20:15.795 { 00:20:15.795 "name": "BaseBdev4", 00:20:15.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.795 "is_configured": false, 00:20:15.795 "data_offset": 0, 00:20:15.795 "data_size": 0 00:20:15.795 } 00:20:15.795 ] 00:20:15.795 }' 00:20:15.795 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.795 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-27 14:19:46.952281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.053 [2024-11-27 14:19:46.952408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-27 14:19:46.964288] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:16.053 [2024-11-27 14:19:46.964429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:16.053 [2024-11-27 14:19:46.964473] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.053 [2024-11-27 14:19:46.964526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.053 [2024-11-27 14:19:46.964569] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:16.053 [2024-11-27 14:19:46.964615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:16.053 [2024-11-27 14:19:46.964661] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:16.053 [2024-11-27 14:19:46.964709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.053 14:19:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 [2024-11-27 14:19:47.017607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.312 BaseBdev1 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.312 
14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 [ 00:20:16.312 { 00:20:16.312 "name": "BaseBdev1", 00:20:16.312 "aliases": [ 00:20:16.312 "137d389d-adad-4e38-baa6-31dce877b374" 00:20:16.312 ], 00:20:16.312 "product_name": "Malloc disk", 00:20:16.312 "block_size": 512, 00:20:16.312 "num_blocks": 65536, 00:20:16.312 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:16.312 "assigned_rate_limits": { 00:20:16.312 "rw_ios_per_sec": 0, 00:20:16.312 "rw_mbytes_per_sec": 0, 00:20:16.312 "r_mbytes_per_sec": 0, 00:20:16.312 "w_mbytes_per_sec": 0 00:20:16.312 }, 00:20:16.312 "claimed": true, 00:20:16.312 "claim_type": "exclusive_write", 00:20:16.312 "zoned": false, 00:20:16.312 "supported_io_types": { 00:20:16.312 "read": true, 00:20:16.312 "write": true, 00:20:16.312 "unmap": true, 00:20:16.312 "flush": true, 00:20:16.312 "reset": true, 00:20:16.312 "nvme_admin": false, 00:20:16.312 "nvme_io": false, 00:20:16.312 "nvme_io_md": false, 00:20:16.312 "write_zeroes": true, 00:20:16.312 "zcopy": true, 00:20:16.312 "get_zone_info": false, 00:20:16.312 "zone_management": false, 00:20:16.312 "zone_append": false, 00:20:16.312 "compare": false, 00:20:16.312 "compare_and_write": false, 00:20:16.312 "abort": true, 00:20:16.312 "seek_hole": false, 00:20:16.312 "seek_data": false, 00:20:16.312 "copy": true, 00:20:16.312 "nvme_iov_md": false 00:20:16.312 }, 00:20:16.312 "memory_domains": [ 00:20:16.312 { 00:20:16.312 "dma_device_id": "system", 00:20:16.312 "dma_device_type": 1 00:20:16.312 }, 00:20:16.312 { 00:20:16.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.312 "dma_device_type": 2 00:20:16.312 } 00:20:16.312 ], 00:20:16.312 "driver_specific": {} 00:20:16.312 } 
00:20:16.312 ] 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:16.312 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.312 "name": "Existed_Raid", 00:20:16.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.312 "strip_size_kb": 64, 00:20:16.312 "state": "configuring", 00:20:16.312 "raid_level": "raid5f", 00:20:16.312 "superblock": false, 00:20:16.312 "num_base_bdevs": 4, 00:20:16.312 "num_base_bdevs_discovered": 1, 00:20:16.312 "num_base_bdevs_operational": 4, 00:20:16.312 "base_bdevs_list": [ 00:20:16.312 { 00:20:16.312 "name": "BaseBdev1", 00:20:16.312 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:16.312 "is_configured": true, 00:20:16.312 "data_offset": 0, 00:20:16.312 "data_size": 65536 00:20:16.312 }, 00:20:16.312 { 00:20:16.313 "name": "BaseBdev2", 00:20:16.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.313 "is_configured": false, 00:20:16.313 "data_offset": 0, 00:20:16.313 "data_size": 0 00:20:16.313 }, 00:20:16.313 { 00:20:16.313 "name": "BaseBdev3", 00:20:16.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.313 "is_configured": false, 00:20:16.313 "data_offset": 0, 00:20:16.313 "data_size": 0 00:20:16.313 }, 00:20:16.313 { 00:20:16.313 "name": "BaseBdev4", 00:20:16.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.313 "is_configured": false, 00:20:16.313 "data_offset": 0, 00:20:16.313 "data_size": 0 00:20:16.313 } 00:20:16.313 ] 00:20:16.313 }' 00:20:16.313 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.313 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.571 
[2024-11-27 14:19:47.492946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:16.571 [2024-11-27 14:19:47.493020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.571 [2024-11-27 14:19:47.501024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.571 [2024-11-27 14:19:47.503270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:16.571 [2024-11-27 14:19:47.503388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:16.571 [2024-11-27 14:19:47.503411] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:16.571 [2024-11-27 14:19:47.503427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:16.571 [2024-11-27 14:19:47.503435] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:16.571 [2024-11-27 14:19:47.503446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.571 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.828 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.828 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.828 "name": "Existed_Raid", 00:20:16.828 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:16.828 "strip_size_kb": 64, 00:20:16.828 "state": "configuring", 00:20:16.828 "raid_level": "raid5f", 00:20:16.828 "superblock": false, 00:20:16.828 "num_base_bdevs": 4, 00:20:16.828 "num_base_bdevs_discovered": 1, 00:20:16.828 "num_base_bdevs_operational": 4, 00:20:16.828 "base_bdevs_list": [ 00:20:16.828 { 00:20:16.828 "name": "BaseBdev1", 00:20:16.828 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:16.828 "is_configured": true, 00:20:16.828 "data_offset": 0, 00:20:16.828 "data_size": 65536 00:20:16.828 }, 00:20:16.828 { 00:20:16.828 "name": "BaseBdev2", 00:20:16.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.828 "is_configured": false, 00:20:16.828 "data_offset": 0, 00:20:16.828 "data_size": 0 00:20:16.828 }, 00:20:16.828 { 00:20:16.828 "name": "BaseBdev3", 00:20:16.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.828 "is_configured": false, 00:20:16.828 "data_offset": 0, 00:20:16.828 "data_size": 0 00:20:16.828 }, 00:20:16.828 { 00:20:16.828 "name": "BaseBdev4", 00:20:16.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.828 "is_configured": false, 00:20:16.828 "data_offset": 0, 00:20:16.828 "data_size": 0 00:20:16.828 } 00:20:16.828 ] 00:20:16.828 }' 00:20:16.828 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.828 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.086 [2024-11-27 14:19:47.970053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.086 BaseBdev2 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.086 14:19:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.086 [ 00:20:17.086 { 00:20:17.086 "name": "BaseBdev2", 00:20:17.086 "aliases": [ 00:20:17.086 "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0" 00:20:17.086 ], 00:20:17.086 "product_name": "Malloc disk", 00:20:17.086 "block_size": 512, 00:20:17.086 "num_blocks": 65536, 00:20:17.086 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:17.086 "assigned_rate_limits": { 00:20:17.086 "rw_ios_per_sec": 0, 00:20:17.086 "rw_mbytes_per_sec": 0, 00:20:17.086 
"r_mbytes_per_sec": 0, 00:20:17.086 "w_mbytes_per_sec": 0 00:20:17.086 }, 00:20:17.086 "claimed": true, 00:20:17.086 "claim_type": "exclusive_write", 00:20:17.086 "zoned": false, 00:20:17.086 "supported_io_types": { 00:20:17.086 "read": true, 00:20:17.086 "write": true, 00:20:17.086 "unmap": true, 00:20:17.086 "flush": true, 00:20:17.086 "reset": true, 00:20:17.086 "nvme_admin": false, 00:20:17.086 "nvme_io": false, 00:20:17.086 "nvme_io_md": false, 00:20:17.086 "write_zeroes": true, 00:20:17.086 "zcopy": true, 00:20:17.086 "get_zone_info": false, 00:20:17.086 "zone_management": false, 00:20:17.086 "zone_append": false, 00:20:17.086 "compare": false, 00:20:17.086 "compare_and_write": false, 00:20:17.086 "abort": true, 00:20:17.086 "seek_hole": false, 00:20:17.086 "seek_data": false, 00:20:17.086 "copy": true, 00:20:17.086 "nvme_iov_md": false 00:20:17.086 }, 00:20:17.086 "memory_domains": [ 00:20:17.086 { 00:20:17.086 "dma_device_id": "system", 00:20:17.086 "dma_device_type": 1 00:20:17.086 }, 00:20:17.086 { 00:20:17.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.086 "dma_device_type": 2 00:20:17.086 } 00:20:17.086 ], 00:20:17.086 "driver_specific": {} 00:20:17.086 } 00:20:17.086 ] 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.086 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.345 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.345 "name": "Existed_Raid", 00:20:17.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.345 "strip_size_kb": 64, 00:20:17.345 "state": "configuring", 00:20:17.345 "raid_level": "raid5f", 00:20:17.345 "superblock": false, 00:20:17.345 "num_base_bdevs": 4, 00:20:17.345 "num_base_bdevs_discovered": 2, 00:20:17.345 "num_base_bdevs_operational": 4, 00:20:17.345 "base_bdevs_list": [ 00:20:17.345 { 00:20:17.345 "name": "BaseBdev1", 00:20:17.345 "uuid": 
"137d389d-adad-4e38-baa6-31dce877b374", 00:20:17.345 "is_configured": true, 00:20:17.345 "data_offset": 0, 00:20:17.345 "data_size": 65536 00:20:17.345 }, 00:20:17.345 { 00:20:17.345 "name": "BaseBdev2", 00:20:17.345 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:17.345 "is_configured": true, 00:20:17.345 "data_offset": 0, 00:20:17.345 "data_size": 65536 00:20:17.345 }, 00:20:17.345 { 00:20:17.345 "name": "BaseBdev3", 00:20:17.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.345 "is_configured": false, 00:20:17.345 "data_offset": 0, 00:20:17.345 "data_size": 0 00:20:17.345 }, 00:20:17.345 { 00:20:17.345 "name": "BaseBdev4", 00:20:17.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.345 "is_configured": false, 00:20:17.345 "data_offset": 0, 00:20:17.345 "data_size": 0 00:20:17.345 } 00:20:17.345 ] 00:20:17.345 }' 00:20:17.345 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.345 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.604 [2024-11-27 14:19:48.514518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:17.604 BaseBdev3 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.604 [ 00:20:17.604 { 00:20:17.604 "name": "BaseBdev3", 00:20:17.604 "aliases": [ 00:20:17.604 "5d238542-3762-49f3-99d4-40c8417d9e3d" 00:20:17.604 ], 00:20:17.604 "product_name": "Malloc disk", 00:20:17.604 "block_size": 512, 00:20:17.604 "num_blocks": 65536, 00:20:17.604 "uuid": "5d238542-3762-49f3-99d4-40c8417d9e3d", 00:20:17.604 "assigned_rate_limits": { 00:20:17.604 "rw_ios_per_sec": 0, 00:20:17.604 "rw_mbytes_per_sec": 0, 00:20:17.604 "r_mbytes_per_sec": 0, 00:20:17.604 "w_mbytes_per_sec": 0 00:20:17.604 }, 00:20:17.604 "claimed": true, 00:20:17.604 "claim_type": "exclusive_write", 00:20:17.604 "zoned": false, 00:20:17.604 "supported_io_types": { 00:20:17.604 "read": true, 00:20:17.604 "write": true, 00:20:17.604 "unmap": true, 00:20:17.604 "flush": true, 00:20:17.604 "reset": true, 00:20:17.604 "nvme_admin": false, 
00:20:17.604 "nvme_io": false, 00:20:17.604 "nvme_io_md": false, 00:20:17.604 "write_zeroes": true, 00:20:17.604 "zcopy": true, 00:20:17.604 "get_zone_info": false, 00:20:17.604 "zone_management": false, 00:20:17.604 "zone_append": false, 00:20:17.604 "compare": false, 00:20:17.604 "compare_and_write": false, 00:20:17.604 "abort": true, 00:20:17.604 "seek_hole": false, 00:20:17.604 "seek_data": false, 00:20:17.604 "copy": true, 00:20:17.604 "nvme_iov_md": false 00:20:17.604 }, 00:20:17.604 "memory_domains": [ 00:20:17.604 { 00:20:17.604 "dma_device_id": "system", 00:20:17.604 "dma_device_type": 1 00:20:17.604 }, 00:20:17.604 { 00:20:17.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.604 "dma_device_type": 2 00:20:17.604 } 00:20:17.604 ], 00:20:17.604 "driver_specific": {} 00:20:17.604 } 00:20:17.604 ] 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:17.604 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.605 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.863 "name": "Existed_Raid", 00:20:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.863 "strip_size_kb": 64, 00:20:17.863 "state": "configuring", 00:20:17.863 "raid_level": "raid5f", 00:20:17.863 "superblock": false, 00:20:17.863 "num_base_bdevs": 4, 00:20:17.863 "num_base_bdevs_discovered": 3, 00:20:17.863 "num_base_bdevs_operational": 4, 00:20:17.863 "base_bdevs_list": [ 00:20:17.863 { 00:20:17.863 "name": "BaseBdev1", 00:20:17.863 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:17.863 "is_configured": true, 00:20:17.863 "data_offset": 0, 00:20:17.863 "data_size": 65536 00:20:17.863 }, 00:20:17.863 { 00:20:17.863 "name": "BaseBdev2", 00:20:17.863 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:17.863 "is_configured": true, 00:20:17.863 "data_offset": 0, 00:20:17.863 "data_size": 65536 00:20:17.863 }, 00:20:17.863 { 
00:20:17.863 "name": "BaseBdev3", 00:20:17.863 "uuid": "5d238542-3762-49f3-99d4-40c8417d9e3d", 00:20:17.863 "is_configured": true, 00:20:17.863 "data_offset": 0, 00:20:17.863 "data_size": 65536 00:20:17.863 }, 00:20:17.863 { 00:20:17.863 "name": "BaseBdev4", 00:20:17.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.863 "is_configured": false, 00:20:17.863 "data_offset": 0, 00:20:17.863 "data_size": 0 00:20:17.863 } 00:20:17.863 ] 00:20:17.863 }' 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.863 14:19:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.122 [2024-11-27 14:19:49.055375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:18.122 [2024-11-27 14:19:49.055543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:18.122 [2024-11-27 14:19:49.055574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:18.122 [2024-11-27 14:19:49.055920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:18.122 [2024-11-27 14:19:49.063855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:18.122 [2024-11-27 14:19:49.063977] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:18.122 [2024-11-27 14:19:49.064415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.122 BaseBdev4 00:20:18.122 14:19:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.122 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.393 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.394 [ 00:20:18.394 { 00:20:18.394 "name": "BaseBdev4", 00:20:18.394 "aliases": [ 00:20:18.394 "da6914dc-5b15-4624-bee0-ffe44775ea99" 00:20:18.394 ], 00:20:18.394 "product_name": "Malloc disk", 00:20:18.394 "block_size": 512, 00:20:18.394 "num_blocks": 65536, 00:20:18.394 "uuid": "da6914dc-5b15-4624-bee0-ffe44775ea99", 00:20:18.394 "assigned_rate_limits": { 00:20:18.394 "rw_ios_per_sec": 0, 00:20:18.394 
"rw_mbytes_per_sec": 0, 00:20:18.394 "r_mbytes_per_sec": 0, 00:20:18.394 "w_mbytes_per_sec": 0 00:20:18.394 }, 00:20:18.394 "claimed": true, 00:20:18.394 "claim_type": "exclusive_write", 00:20:18.394 "zoned": false, 00:20:18.394 "supported_io_types": { 00:20:18.394 "read": true, 00:20:18.394 "write": true, 00:20:18.394 "unmap": true, 00:20:18.394 "flush": true, 00:20:18.394 "reset": true, 00:20:18.394 "nvme_admin": false, 00:20:18.394 "nvme_io": false, 00:20:18.394 "nvme_io_md": false, 00:20:18.394 "write_zeroes": true, 00:20:18.394 "zcopy": true, 00:20:18.394 "get_zone_info": false, 00:20:18.394 "zone_management": false, 00:20:18.394 "zone_append": false, 00:20:18.394 "compare": false, 00:20:18.394 "compare_and_write": false, 00:20:18.394 "abort": true, 00:20:18.394 "seek_hole": false, 00:20:18.394 "seek_data": false, 00:20:18.394 "copy": true, 00:20:18.394 "nvme_iov_md": false 00:20:18.394 }, 00:20:18.394 "memory_domains": [ 00:20:18.394 { 00:20:18.394 "dma_device_id": "system", 00:20:18.394 "dma_device_type": 1 00:20:18.394 }, 00:20:18.394 { 00:20:18.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.394 "dma_device_type": 2 00:20:18.394 } 00:20:18.394 ], 00:20:18.394 "driver_specific": {} 00:20:18.394 } 00:20:18.394 ] 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.394 14:19:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.394 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.394 "name": "Existed_Raid", 00:20:18.394 "uuid": "2b175fab-f219-448d-bb19-a5ba76862ef1", 00:20:18.394 "strip_size_kb": 64, 00:20:18.394 "state": "online", 00:20:18.394 "raid_level": "raid5f", 00:20:18.394 "superblock": false, 00:20:18.394 "num_base_bdevs": 4, 00:20:18.394 "num_base_bdevs_discovered": 4, 00:20:18.394 "num_base_bdevs_operational": 4, 00:20:18.394 "base_bdevs_list": [ 00:20:18.394 { 00:20:18.394 "name": 
"BaseBdev1", 00:20:18.394 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:18.394 "is_configured": true, 00:20:18.394 "data_offset": 0, 00:20:18.395 "data_size": 65536 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "name": "BaseBdev2", 00:20:18.395 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:18.395 "is_configured": true, 00:20:18.395 "data_offset": 0, 00:20:18.395 "data_size": 65536 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "name": "BaseBdev3", 00:20:18.395 "uuid": "5d238542-3762-49f3-99d4-40c8417d9e3d", 00:20:18.395 "is_configured": true, 00:20:18.395 "data_offset": 0, 00:20:18.395 "data_size": 65536 00:20:18.395 }, 00:20:18.395 { 00:20:18.395 "name": "BaseBdev4", 00:20:18.395 "uuid": "da6914dc-5b15-4624-bee0-ffe44775ea99", 00:20:18.395 "is_configured": true, 00:20:18.395 "data_offset": 0, 00:20:18.395 "data_size": 65536 00:20:18.395 } 00:20:18.395 ] 00:20:18.395 }' 00:20:18.395 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.395 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.654 [2024-11-27 14:19:49.581220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.654 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:18.914 "name": "Existed_Raid", 00:20:18.914 "aliases": [ 00:20:18.914 "2b175fab-f219-448d-bb19-a5ba76862ef1" 00:20:18.914 ], 00:20:18.914 "product_name": "Raid Volume", 00:20:18.914 "block_size": 512, 00:20:18.914 "num_blocks": 196608, 00:20:18.914 "uuid": "2b175fab-f219-448d-bb19-a5ba76862ef1", 00:20:18.914 "assigned_rate_limits": { 00:20:18.914 "rw_ios_per_sec": 0, 00:20:18.914 "rw_mbytes_per_sec": 0, 00:20:18.914 "r_mbytes_per_sec": 0, 00:20:18.914 "w_mbytes_per_sec": 0 00:20:18.914 }, 00:20:18.914 "claimed": false, 00:20:18.914 "zoned": false, 00:20:18.914 "supported_io_types": { 00:20:18.914 "read": true, 00:20:18.914 "write": true, 00:20:18.914 "unmap": false, 00:20:18.914 "flush": false, 00:20:18.914 "reset": true, 00:20:18.914 "nvme_admin": false, 00:20:18.914 "nvme_io": false, 00:20:18.914 "nvme_io_md": false, 00:20:18.914 "write_zeroes": true, 00:20:18.914 "zcopy": false, 00:20:18.914 "get_zone_info": false, 00:20:18.914 "zone_management": false, 00:20:18.914 "zone_append": false, 00:20:18.914 "compare": false, 00:20:18.914 "compare_and_write": false, 00:20:18.914 "abort": false, 00:20:18.914 "seek_hole": false, 00:20:18.914 "seek_data": false, 00:20:18.914 "copy": false, 00:20:18.914 "nvme_iov_md": false 00:20:18.914 }, 00:20:18.914 "driver_specific": { 00:20:18.914 "raid": { 00:20:18.914 "uuid": "2b175fab-f219-448d-bb19-a5ba76862ef1", 00:20:18.914 "strip_size_kb": 64, 
00:20:18.914 "state": "online", 00:20:18.914 "raid_level": "raid5f", 00:20:18.914 "superblock": false, 00:20:18.914 "num_base_bdevs": 4, 00:20:18.914 "num_base_bdevs_discovered": 4, 00:20:18.914 "num_base_bdevs_operational": 4, 00:20:18.914 "base_bdevs_list": [ 00:20:18.914 { 00:20:18.914 "name": "BaseBdev1", 00:20:18.914 "uuid": "137d389d-adad-4e38-baa6-31dce877b374", 00:20:18.914 "is_configured": true, 00:20:18.914 "data_offset": 0, 00:20:18.914 "data_size": 65536 00:20:18.914 }, 00:20:18.914 { 00:20:18.914 "name": "BaseBdev2", 00:20:18.914 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:18.914 "is_configured": true, 00:20:18.914 "data_offset": 0, 00:20:18.914 "data_size": 65536 00:20:18.914 }, 00:20:18.914 { 00:20:18.914 "name": "BaseBdev3", 00:20:18.914 "uuid": "5d238542-3762-49f3-99d4-40c8417d9e3d", 00:20:18.914 "is_configured": true, 00:20:18.914 "data_offset": 0, 00:20:18.914 "data_size": 65536 00:20:18.914 }, 00:20:18.914 { 00:20:18.914 "name": "BaseBdev4", 00:20:18.914 "uuid": "da6914dc-5b15-4624-bee0-ffe44775ea99", 00:20:18.914 "is_configured": true, 00:20:18.914 "data_offset": 0, 00:20:18.914 "data_size": 65536 00:20:18.914 } 00:20:18.914 ] 00:20:18.914 } 00:20:18.914 } 00:20:18.914 }' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:18.914 BaseBdev2 00:20:18.914 BaseBdev3 00:20:18.914 BaseBdev4' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.914 14:19:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.914 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.175 [2024-11-27 14:19:49.880524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.175 14:19:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.175 14:19:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.175 "name": "Existed_Raid", 00:20:19.175 "uuid": "2b175fab-f219-448d-bb19-a5ba76862ef1", 00:20:19.175 "strip_size_kb": 64, 00:20:19.175 "state": "online", 00:20:19.175 "raid_level": "raid5f", 00:20:19.175 "superblock": false, 00:20:19.175 "num_base_bdevs": 4, 00:20:19.175 "num_base_bdevs_discovered": 3, 00:20:19.175 "num_base_bdevs_operational": 3, 00:20:19.175 "base_bdevs_list": [ 00:20:19.175 { 00:20:19.175 "name": null, 00:20:19.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.175 "is_configured": false, 00:20:19.175 "data_offset": 0, 00:20:19.175 "data_size": 65536 00:20:19.175 }, 00:20:19.175 { 00:20:19.175 "name": "BaseBdev2", 00:20:19.175 "uuid": "04d6d8ad-5550-4d5d-b4cc-c24bf939eaf0", 00:20:19.175 "is_configured": true, 00:20:19.175 "data_offset": 0, 00:20:19.175 "data_size": 65536 00:20:19.175 }, 00:20:19.175 { 00:20:19.175 "name": "BaseBdev3", 00:20:19.175 "uuid": "5d238542-3762-49f3-99d4-40c8417d9e3d", 00:20:19.175 "is_configured": true, 00:20:19.175 "data_offset": 0, 00:20:19.175 "data_size": 65536 00:20:19.175 }, 00:20:19.175 { 00:20:19.175 "name": "BaseBdev4", 00:20:19.175 "uuid": "da6914dc-5b15-4624-bee0-ffe44775ea99", 00:20:19.175 "is_configured": true, 00:20:19.175 "data_offset": 0, 00:20:19.175 "data_size": 65536 00:20:19.175 } 00:20:19.175 ] 00:20:19.175 }' 00:20:19.175 
14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.175 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.753 [2024-11-27 14:19:50.499824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:19.753 [2024-11-27 14:19:50.499953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.753 [2024-11-27 14:19:50.609768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.753 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.754 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.754 [2024-11-27 14:19:50.669722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:20.012 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.012 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:20.012 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.013 [2024-11-27 14:19:50.845570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:20.013 [2024-11-27 14:19:50.845639] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:20.013 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:20.272 14:19:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 BaseBdev2 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 [ 00:20:20.272 { 00:20:20.272 "name": "BaseBdev2", 00:20:20.272 "aliases": [ 00:20:20.272 "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8" 00:20:20.272 ], 00:20:20.272 "product_name": "Malloc disk", 00:20:20.272 "block_size": 512, 00:20:20.272 "num_blocks": 65536, 00:20:20.272 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:20.272 "assigned_rate_limits": { 00:20:20.272 "rw_ios_per_sec": 0, 00:20:20.272 "rw_mbytes_per_sec": 0, 00:20:20.272 "r_mbytes_per_sec": 0, 00:20:20.272 "w_mbytes_per_sec": 0 00:20:20.272 }, 00:20:20.272 "claimed": false, 00:20:20.272 "zoned": false, 00:20:20.272 "supported_io_types": { 00:20:20.272 "read": true, 00:20:20.272 "write": true, 00:20:20.272 "unmap": true, 00:20:20.272 "flush": true, 00:20:20.272 "reset": true, 00:20:20.272 "nvme_admin": false, 00:20:20.272 "nvme_io": false, 00:20:20.272 "nvme_io_md": false, 00:20:20.272 "write_zeroes": true, 00:20:20.272 "zcopy": true, 00:20:20.272 "get_zone_info": false, 00:20:20.272 "zone_management": false, 00:20:20.272 "zone_append": false, 00:20:20.272 "compare": false, 00:20:20.272 "compare_and_write": false, 00:20:20.272 "abort": true, 00:20:20.272 "seek_hole": false, 00:20:20.272 "seek_data": false, 00:20:20.272 "copy": true, 00:20:20.272 "nvme_iov_md": false 00:20:20.272 }, 00:20:20.272 "memory_domains": [ 00:20:20.272 { 00:20:20.272 "dma_device_id": "system", 00:20:20.272 
"dma_device_type": 1 00:20:20.272 }, 00:20:20.272 { 00:20:20.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.272 "dma_device_type": 2 00:20:20.272 } 00:20:20.272 ], 00:20:20.272 "driver_specific": {} 00:20:20.272 } 00:20:20.272 ] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 BaseBdev3 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:20.272 14:19:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.272 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.272 [ 00:20:20.272 { 00:20:20.272 "name": "BaseBdev3", 00:20:20.272 "aliases": [ 00:20:20.272 "93d02093-ea75-4d33-9c6e-75cf8c917401" 00:20:20.272 ], 00:20:20.272 "product_name": "Malloc disk", 00:20:20.272 "block_size": 512, 00:20:20.272 "num_blocks": 65536, 00:20:20.272 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:20.272 "assigned_rate_limits": { 00:20:20.272 "rw_ios_per_sec": 0, 00:20:20.272 "rw_mbytes_per_sec": 0, 00:20:20.272 "r_mbytes_per_sec": 0, 00:20:20.272 "w_mbytes_per_sec": 0 00:20:20.272 }, 00:20:20.272 "claimed": false, 00:20:20.272 "zoned": false, 00:20:20.272 "supported_io_types": { 00:20:20.272 "read": true, 00:20:20.272 "write": true, 00:20:20.272 "unmap": true, 00:20:20.273 "flush": true, 00:20:20.273 "reset": true, 00:20:20.273 "nvme_admin": false, 00:20:20.273 "nvme_io": false, 00:20:20.273 "nvme_io_md": false, 00:20:20.273 "write_zeroes": true, 00:20:20.273 "zcopy": true, 00:20:20.273 "get_zone_info": false, 00:20:20.273 "zone_management": false, 00:20:20.273 "zone_append": false, 00:20:20.273 "compare": false, 00:20:20.273 "compare_and_write": false, 00:20:20.273 "abort": true, 00:20:20.273 "seek_hole": false, 00:20:20.273 "seek_data": false, 00:20:20.273 "copy": true, 00:20:20.273 "nvme_iov_md": false 00:20:20.273 }, 00:20:20.273 "memory_domains": [ 00:20:20.273 { 00:20:20.273 
"dma_device_id": "system", 00:20:20.273 "dma_device_type": 1 00:20:20.273 }, 00:20:20.273 { 00:20:20.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.273 "dma_device_type": 2 00:20:20.273 } 00:20:20.273 ], 00:20:20.273 "driver_specific": {} 00:20:20.273 } 00:20:20.273 ] 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.273 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 BaseBdev4 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 [ 00:20:20.532 { 00:20:20.532 "name": "BaseBdev4", 00:20:20.532 "aliases": [ 00:20:20.532 "68af495f-8543-435d-80c4-4bd953020dea" 00:20:20.532 ], 00:20:20.532 "product_name": "Malloc disk", 00:20:20.532 "block_size": 512, 00:20:20.532 "num_blocks": 65536, 00:20:20.532 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:20.532 "assigned_rate_limits": { 00:20:20.532 "rw_ios_per_sec": 0, 00:20:20.532 "rw_mbytes_per_sec": 0, 00:20:20.532 "r_mbytes_per_sec": 0, 00:20:20.532 "w_mbytes_per_sec": 0 00:20:20.532 }, 00:20:20.532 "claimed": false, 00:20:20.532 "zoned": false, 00:20:20.532 "supported_io_types": { 00:20:20.532 "read": true, 00:20:20.532 "write": true, 00:20:20.532 "unmap": true, 00:20:20.532 "flush": true, 00:20:20.532 "reset": true, 00:20:20.532 "nvme_admin": false, 00:20:20.532 "nvme_io": false, 00:20:20.532 "nvme_io_md": false, 00:20:20.532 "write_zeroes": true, 00:20:20.532 "zcopy": true, 00:20:20.532 "get_zone_info": false, 00:20:20.532 "zone_management": false, 00:20:20.532 "zone_append": false, 00:20:20.532 "compare": false, 00:20:20.532 "compare_and_write": false, 00:20:20.532 "abort": true, 00:20:20.532 "seek_hole": false, 00:20:20.532 "seek_data": false, 00:20:20.532 "copy": true, 00:20:20.532 "nvme_iov_md": false 00:20:20.532 }, 00:20:20.532 "memory_domains": [ 
00:20:20.532 { 00:20:20.532 "dma_device_id": "system", 00:20:20.532 "dma_device_type": 1 00:20:20.532 }, 00:20:20.532 { 00:20:20.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.532 "dma_device_type": 2 00:20:20.532 } 00:20:20.532 ], 00:20:20.532 "driver_specific": {} 00:20:20.532 } 00:20:20.532 ] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 [2024-11-27 14:19:51.295670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:20.532 [2024-11-27 14:19:51.295837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:20.532 [2024-11-27 14:19:51.295910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:20.532 [2024-11-27 14:19:51.298166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:20.532 [2024-11-27 14:19:51.298289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.532 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.532 "name": "Existed_Raid", 00:20:20.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.532 "strip_size_kb": 64, 00:20:20.532 "state": "configuring", 00:20:20.532 "raid_level": "raid5f", 00:20:20.532 
"superblock": false, 00:20:20.532 "num_base_bdevs": 4, 00:20:20.532 "num_base_bdevs_discovered": 3, 00:20:20.532 "num_base_bdevs_operational": 4, 00:20:20.532 "base_bdevs_list": [ 00:20:20.532 { 00:20:20.532 "name": "BaseBdev1", 00:20:20.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.532 "is_configured": false, 00:20:20.532 "data_offset": 0, 00:20:20.532 "data_size": 0 00:20:20.532 }, 00:20:20.532 { 00:20:20.532 "name": "BaseBdev2", 00:20:20.532 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:20.532 "is_configured": true, 00:20:20.532 "data_offset": 0, 00:20:20.532 "data_size": 65536 00:20:20.532 }, 00:20:20.532 { 00:20:20.533 "name": "BaseBdev3", 00:20:20.533 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:20.533 "is_configured": true, 00:20:20.533 "data_offset": 0, 00:20:20.533 "data_size": 65536 00:20:20.533 }, 00:20:20.533 { 00:20:20.533 "name": "BaseBdev4", 00:20:20.533 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:20.533 "is_configured": true, 00:20:20.533 "data_offset": 0, 00:20:20.533 "data_size": 65536 00:20:20.533 } 00:20:20.533 ] 00:20:20.533 }' 00:20:20.533 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.533 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.100 [2024-11-27 14:19:51.786802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.100 "name": "Existed_Raid", 00:20:21.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.100 "strip_size_kb": 64, 00:20:21.100 "state": "configuring", 00:20:21.100 "raid_level": "raid5f", 00:20:21.100 "superblock": false, 
00:20:21.100 "num_base_bdevs": 4, 00:20:21.100 "num_base_bdevs_discovered": 2, 00:20:21.100 "num_base_bdevs_operational": 4, 00:20:21.100 "base_bdevs_list": [ 00:20:21.100 { 00:20:21.100 "name": "BaseBdev1", 00:20:21.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.100 "is_configured": false, 00:20:21.100 "data_offset": 0, 00:20:21.100 "data_size": 0 00:20:21.100 }, 00:20:21.100 { 00:20:21.100 "name": null, 00:20:21.100 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:21.100 "is_configured": false, 00:20:21.100 "data_offset": 0, 00:20:21.100 "data_size": 65536 00:20:21.100 }, 00:20:21.100 { 00:20:21.100 "name": "BaseBdev3", 00:20:21.100 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:21.100 "is_configured": true, 00:20:21.100 "data_offset": 0, 00:20:21.100 "data_size": 65536 00:20:21.100 }, 00:20:21.100 { 00:20:21.100 "name": "BaseBdev4", 00:20:21.100 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:21.100 "is_configured": true, 00:20:21.100 "data_offset": 0, 00:20:21.100 "data_size": 65536 00:20:21.100 } 00:20:21.100 ] 00:20:21.100 }' 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.100 14:19:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:21.358 
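The trace above removes BaseBdev2 and then shows its slot in `base_bdevs_list` with `"name": null` and `"is_configured": false` while `num_base_bdevs_discovered` drops from 3 to 2. A minimal bash sketch of that bookkeeping pattern (an assumed simplification for illustration, not the real SPDK state machine; the slot is nulled in place rather than removed from the list):

```shell
#!/usr/bin/env bash
# Hypothetical mirror of the bookkeeping the log shows: removing a base
# bdev marks its slot unconfigured instead of shrinking the list.
declare -a configured=(true true true true)   # BaseBdev1..BaseBdev4 slots
discovered=4

remove_base_bdev() {            # $1 = slot index, 0-based
    configured[$1]=false
    discovered=$((discovered - 1))
}

remove_base_bdev 1              # mirrors "bdev_raid_remove_base_bdev BaseBdev2"
echo "discovered=$discovered configured=${configured[*]}"
# prints: discovered=3 configured=true false true true
```

The subsequent `jq '.[0].base_bdevs_list[1].is_configured'` check in the log is exactly this slot lookup against the live RPC output.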
14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.358 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.617 [2024-11-27 14:19:52.340643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.617 BaseBdev1 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.617 
14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.617 [ 00:20:21.617 { 00:20:21.617 "name": "BaseBdev1", 00:20:21.617 "aliases": [ 00:20:21.617 "e8292da8-6e6a-405f-b9d0-583eadca57fa" 00:20:21.617 ], 00:20:21.617 "product_name": "Malloc disk", 00:20:21.617 "block_size": 512, 00:20:21.617 "num_blocks": 65536, 00:20:21.617 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:21.617 "assigned_rate_limits": { 00:20:21.617 "rw_ios_per_sec": 0, 00:20:21.617 "rw_mbytes_per_sec": 0, 00:20:21.617 "r_mbytes_per_sec": 0, 00:20:21.617 "w_mbytes_per_sec": 0 00:20:21.617 }, 00:20:21.617 "claimed": true, 00:20:21.617 "claim_type": "exclusive_write", 00:20:21.617 "zoned": false, 00:20:21.617 "supported_io_types": { 00:20:21.617 "read": true, 00:20:21.617 "write": true, 00:20:21.617 "unmap": true, 00:20:21.617 "flush": true, 00:20:21.617 "reset": true, 00:20:21.617 "nvme_admin": false, 00:20:21.617 "nvme_io": false, 00:20:21.617 "nvme_io_md": false, 00:20:21.617 "write_zeroes": true, 00:20:21.617 "zcopy": true, 00:20:21.617 "get_zone_info": false, 00:20:21.617 "zone_management": false, 00:20:21.617 "zone_append": false, 00:20:21.617 "compare": false, 00:20:21.617 "compare_and_write": false, 00:20:21.617 "abort": true, 00:20:21.617 "seek_hole": false, 00:20:21.617 "seek_data": false, 00:20:21.617 "copy": true, 00:20:21.617 "nvme_iov_md": false 00:20:21.617 }, 00:20:21.617 "memory_domains": [ 00:20:21.617 { 00:20:21.617 "dma_device_id": "system", 00:20:21.617 "dma_device_type": 1 00:20:21.617 }, 00:20:21.617 { 00:20:21.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.617 "dma_device_type": 2 00:20:21.617 } 00:20:21.617 ], 00:20:21.617 "driver_specific": {} 00:20:21.617 } 00:20:21.617 ] 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:21.617 14:19:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.617 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.618 "name": "Existed_Raid", 00:20:21.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.618 "strip_size_kb": 64, 00:20:21.618 "state": 
"configuring", 00:20:21.618 "raid_level": "raid5f", 00:20:21.618 "superblock": false, 00:20:21.618 "num_base_bdevs": 4, 00:20:21.618 "num_base_bdevs_discovered": 3, 00:20:21.618 "num_base_bdevs_operational": 4, 00:20:21.618 "base_bdevs_list": [ 00:20:21.618 { 00:20:21.618 "name": "BaseBdev1", 00:20:21.618 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:21.618 "is_configured": true, 00:20:21.618 "data_offset": 0, 00:20:21.618 "data_size": 65536 00:20:21.618 }, 00:20:21.618 { 00:20:21.618 "name": null, 00:20:21.618 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:21.618 "is_configured": false, 00:20:21.618 "data_offset": 0, 00:20:21.618 "data_size": 65536 00:20:21.618 }, 00:20:21.618 { 00:20:21.618 "name": "BaseBdev3", 00:20:21.618 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:21.618 "is_configured": true, 00:20:21.618 "data_offset": 0, 00:20:21.618 "data_size": 65536 00:20:21.618 }, 00:20:21.618 { 00:20:21.618 "name": "BaseBdev4", 00:20:21.618 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:21.618 "is_configured": true, 00:20:21.618 "data_offset": 0, 00:20:21.618 "data_size": 65536 00:20:21.618 } 00:20:21.618 ] 00:20:21.618 }' 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.618 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.186 14:19:52 
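Each `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` call in the trace follows one pattern: fetch the raid bdev info via `rpc_cmd bdev_raid_get_bdevs all`, select the entry with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare fields to the expected values. A self-contained sketch of the extract-and-compare step (the grep/sed extraction here is a simplified stand-in for the real script's jq filter):

```shell
#!/usr/bin/env bash
# Abbreviated sample of the raid_bdev_info JSON seen in the log.
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring",
 "raid_level": "raid5f", "strip_size_kb": 64,
 "num_base_bdevs": 4, "num_base_bdevs_discovered": 3}'

# Extract one scalar field (simplified; the real script pipes rpc_cmd
# output through jq instead of grep/sed).
get_field() {
    grep -o "\"$1\": [^,}]*" <<< "$raid_bdev_info" | head -1 \
        | sed -e "s/\"$1\": //" -e 's/"//g'
}

state=$(get_field state)
raid_level=$(get_field raid_level)

# Same comparison style the xtrace shows, e.g. [[ false == \f\a\l\s\e ]].
[[ $state == configuring ]] && [[ $raid_level == raid5f ]] \
    && echo "state OK: $state/$raid_level"
# prints: state OK: configuring/raid5f
```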
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.186 [2024-11-27 14:19:52.884094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.186 14:19:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.186 "name": "Existed_Raid", 00:20:22.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.186 "strip_size_kb": 64, 00:20:22.186 "state": "configuring", 00:20:22.186 "raid_level": "raid5f", 00:20:22.186 "superblock": false, 00:20:22.186 "num_base_bdevs": 4, 00:20:22.186 "num_base_bdevs_discovered": 2, 00:20:22.186 "num_base_bdevs_operational": 4, 00:20:22.186 "base_bdevs_list": [ 00:20:22.186 { 00:20:22.186 "name": "BaseBdev1", 00:20:22.186 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:22.186 "is_configured": true, 00:20:22.186 "data_offset": 0, 00:20:22.186 "data_size": 65536 00:20:22.186 }, 00:20:22.186 { 00:20:22.186 "name": null, 00:20:22.186 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:22.186 "is_configured": false, 00:20:22.186 "data_offset": 0, 00:20:22.186 "data_size": 65536 00:20:22.186 }, 00:20:22.186 { 00:20:22.186 "name": null, 00:20:22.186 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:22.186 "is_configured": false, 00:20:22.186 "data_offset": 0, 00:20:22.186 "data_size": 65536 00:20:22.186 }, 00:20:22.186 { 00:20:22.186 "name": "BaseBdev4", 00:20:22.186 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:22.186 "is_configured": true, 00:20:22.186 "data_offset": 0, 00:20:22.186 "data_size": 65536 00:20:22.186 } 00:20:22.186 ] 00:20:22.186 }' 00:20:22.186 14:19:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.186 14:19:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.445 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.703 [2024-11-27 14:19:53.400067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:22.703 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.703 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.704 
14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.704 "name": "Existed_Raid", 00:20:22.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.704 "strip_size_kb": 64, 00:20:22.704 "state": "configuring", 00:20:22.704 "raid_level": "raid5f", 00:20:22.704 "superblock": false, 00:20:22.704 "num_base_bdevs": 4, 00:20:22.704 "num_base_bdevs_discovered": 3, 00:20:22.704 "num_base_bdevs_operational": 4, 00:20:22.704 "base_bdevs_list": [ 00:20:22.704 { 00:20:22.704 "name": "BaseBdev1", 00:20:22.704 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:22.704 "is_configured": true, 00:20:22.704 "data_offset": 0, 00:20:22.704 "data_size": 65536 00:20:22.704 }, 00:20:22.704 { 00:20:22.704 "name": null, 00:20:22.704 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:22.704 "is_configured": 
false, 00:20:22.704 "data_offset": 0, 00:20:22.704 "data_size": 65536 00:20:22.704 }, 00:20:22.704 { 00:20:22.704 "name": "BaseBdev3", 00:20:22.704 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:22.704 "is_configured": true, 00:20:22.704 "data_offset": 0, 00:20:22.704 "data_size": 65536 00:20:22.704 }, 00:20:22.704 { 00:20:22.704 "name": "BaseBdev4", 00:20:22.704 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:22.704 "is_configured": true, 00:20:22.704 "data_offset": 0, 00:20:22.704 "data_size": 65536 00:20:22.704 } 00:20:22.704 ] 00:20:22.704 }' 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.704 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.962 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.962 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.962 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.962 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:22.962 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.221 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:23.221 14:19:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:23.221 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.221 14:19:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.221 [2024-11-27 14:19:53.932139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.221 "name": "Existed_Raid", 00:20:23.221 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:23.221 "strip_size_kb": 64, 00:20:23.221 "state": "configuring", 00:20:23.221 "raid_level": "raid5f", 00:20:23.221 "superblock": false, 00:20:23.221 "num_base_bdevs": 4, 00:20:23.221 "num_base_bdevs_discovered": 2, 00:20:23.221 "num_base_bdevs_operational": 4, 00:20:23.221 "base_bdevs_list": [ 00:20:23.221 { 00:20:23.221 "name": null, 00:20:23.221 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:23.221 "is_configured": false, 00:20:23.221 "data_offset": 0, 00:20:23.221 "data_size": 65536 00:20:23.221 }, 00:20:23.221 { 00:20:23.221 "name": null, 00:20:23.221 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:23.221 "is_configured": false, 00:20:23.221 "data_offset": 0, 00:20:23.221 "data_size": 65536 00:20:23.221 }, 00:20:23.221 { 00:20:23.221 "name": "BaseBdev3", 00:20:23.221 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:23.221 "is_configured": true, 00:20:23.221 "data_offset": 0, 00:20:23.221 "data_size": 65536 00:20:23.221 }, 00:20:23.221 { 00:20:23.221 "name": "BaseBdev4", 00:20:23.221 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:23.221 "is_configured": true, 00:20:23.221 "data_offset": 0, 00:20:23.221 "data_size": 65536 00:20:23.221 } 00:20:23.221 ] 00:20:23.221 }' 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.221 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.790 [2024-11-27 14:19:54.504649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.790 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.790 "name": "Existed_Raid", 00:20:23.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.790 "strip_size_kb": 64, 00:20:23.790 "state": "configuring", 00:20:23.790 "raid_level": "raid5f", 00:20:23.790 "superblock": false, 00:20:23.790 "num_base_bdevs": 4, 00:20:23.790 "num_base_bdevs_discovered": 3, 00:20:23.790 "num_base_bdevs_operational": 4, 00:20:23.790 "base_bdevs_list": [ 00:20:23.790 { 00:20:23.790 "name": null, 00:20:23.790 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:23.790 "is_configured": false, 00:20:23.790 "data_offset": 0, 00:20:23.790 "data_size": 65536 00:20:23.790 }, 00:20:23.790 { 00:20:23.790 "name": "BaseBdev2", 00:20:23.790 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:23.790 "is_configured": true, 00:20:23.790 "data_offset": 0, 00:20:23.790 "data_size": 65536 00:20:23.790 }, 00:20:23.790 { 00:20:23.790 "name": "BaseBdev3", 00:20:23.790 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:23.790 "is_configured": true, 00:20:23.790 "data_offset": 0, 00:20:23.790 "data_size": 65536 00:20:23.790 }, 00:20:23.790 { 00:20:23.790 "name": "BaseBdev4", 00:20:23.790 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:23.790 "is_configured": true, 00:20:23.790 "data_offset": 0, 00:20:23.790 "data_size": 65536 00:20:23.790 } 00:20:23.790 ] 00:20:23.791 }' 00:20:23.791 14:19:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.791 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.049 14:19:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e8292da8-6e6a-405f-b9d0-583eadca57fa 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.335 [2024-11-27 14:19:55.080121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:24.335 [2024-11-27 
14:19:55.080221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:24.335 [2024-11-27 14:19:55.080230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:24.335 [2024-11-27 14:19:55.080520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:24.335 [2024-11-27 14:19:55.088640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:24.335 [2024-11-27 14:19:55.088683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:24.335 [2024-11-27 14:19:55.089027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.335 NewBaseBdev 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:24.335 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 [ 00:20:24.336 { 00:20:24.336 "name": "NewBaseBdev", 00:20:24.336 "aliases": [ 00:20:24.336 "e8292da8-6e6a-405f-b9d0-583eadca57fa" 00:20:24.336 ], 00:20:24.336 "product_name": "Malloc disk", 00:20:24.336 "block_size": 512, 00:20:24.336 "num_blocks": 65536, 00:20:24.336 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:24.336 "assigned_rate_limits": { 00:20:24.336 "rw_ios_per_sec": 0, 00:20:24.336 "rw_mbytes_per_sec": 0, 00:20:24.336 "r_mbytes_per_sec": 0, 00:20:24.336 "w_mbytes_per_sec": 0 00:20:24.336 }, 00:20:24.336 "claimed": true, 00:20:24.336 "claim_type": "exclusive_write", 00:20:24.336 "zoned": false, 00:20:24.336 "supported_io_types": { 00:20:24.336 "read": true, 00:20:24.336 "write": true, 00:20:24.336 "unmap": true, 00:20:24.336 "flush": true, 00:20:24.336 "reset": true, 00:20:24.336 "nvme_admin": false, 00:20:24.336 "nvme_io": false, 00:20:24.336 "nvme_io_md": false, 00:20:24.336 "write_zeroes": true, 00:20:24.336 "zcopy": true, 00:20:24.336 "get_zone_info": false, 00:20:24.336 "zone_management": false, 00:20:24.336 "zone_append": false, 00:20:24.336 "compare": false, 00:20:24.336 "compare_and_write": false, 00:20:24.336 "abort": true, 00:20:24.336 "seek_hole": false, 00:20:24.336 "seek_data": false, 00:20:24.336 "copy": true, 00:20:24.336 "nvme_iov_md": false 00:20:24.336 }, 00:20:24.336 "memory_domains": [ 00:20:24.336 { 00:20:24.336 "dma_device_id": "system", 00:20:24.336 "dma_device_type": 1 00:20:24.336 }, 00:20:24.336 { 00:20:24.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.336 "dma_device_type": 2 00:20:24.336 } 
00:20:24.336 ], 00:20:24.336 "driver_specific": {} 00:20:24.336 } 00:20:24.336 ] 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.336 "name": "Existed_Raid", 00:20:24.336 "uuid": "6de8ecd2-f4f1-480e-aa42-85e198c0a075", 00:20:24.336 "strip_size_kb": 64, 00:20:24.336 "state": "online", 00:20:24.336 "raid_level": "raid5f", 00:20:24.336 "superblock": false, 00:20:24.336 "num_base_bdevs": 4, 00:20:24.336 "num_base_bdevs_discovered": 4, 00:20:24.336 "num_base_bdevs_operational": 4, 00:20:24.336 "base_bdevs_list": [ 00:20:24.336 { 00:20:24.336 "name": "NewBaseBdev", 00:20:24.336 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:24.336 "is_configured": true, 00:20:24.336 "data_offset": 0, 00:20:24.336 "data_size": 65536 00:20:24.336 }, 00:20:24.336 { 00:20:24.336 "name": "BaseBdev2", 00:20:24.336 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:24.336 "is_configured": true, 00:20:24.336 "data_offset": 0, 00:20:24.336 "data_size": 65536 00:20:24.336 }, 00:20:24.336 { 00:20:24.336 "name": "BaseBdev3", 00:20:24.336 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:24.336 "is_configured": true, 00:20:24.336 "data_offset": 0, 00:20:24.336 "data_size": 65536 00:20:24.336 }, 00:20:24.336 { 00:20:24.336 "name": "BaseBdev4", 00:20:24.336 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:24.336 "is_configured": true, 00:20:24.336 "data_offset": 0, 00:20:24.336 "data_size": 65536 00:20:24.336 } 00:20:24.336 ] 00:20:24.336 }' 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.336 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.911 [2024-11-27 14:19:55.597998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.911 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.911 "name": "Existed_Raid", 00:20:24.911 "aliases": [ 00:20:24.911 "6de8ecd2-f4f1-480e-aa42-85e198c0a075" 00:20:24.911 ], 00:20:24.911 "product_name": "Raid Volume", 00:20:24.911 "block_size": 512, 00:20:24.911 "num_blocks": 196608, 00:20:24.911 "uuid": "6de8ecd2-f4f1-480e-aa42-85e198c0a075", 00:20:24.911 "assigned_rate_limits": { 00:20:24.911 "rw_ios_per_sec": 0, 00:20:24.911 "rw_mbytes_per_sec": 0, 00:20:24.911 "r_mbytes_per_sec": 0, 00:20:24.911 "w_mbytes_per_sec": 0 00:20:24.911 }, 00:20:24.911 "claimed": false, 00:20:24.911 "zoned": false, 00:20:24.911 "supported_io_types": { 00:20:24.911 "read": true, 00:20:24.911 "write": true, 00:20:24.911 "unmap": false, 00:20:24.911 "flush": false, 00:20:24.911 "reset": true, 00:20:24.911 "nvme_admin": false, 00:20:24.911 "nvme_io": false, 00:20:24.911 "nvme_io_md": 
false, 00:20:24.911 "write_zeroes": true, 00:20:24.911 "zcopy": false, 00:20:24.967 "get_zone_info": false, 00:20:24.967 "zone_management": false, 00:20:24.967 "zone_append": false, 00:20:24.967 "compare": false, 00:20:24.967 "compare_and_write": false, 00:20:24.967 "abort": false, 00:20:24.967 "seek_hole": false, 00:20:24.967 "seek_data": false, 00:20:24.967 "copy": false, 00:20:24.967 "nvme_iov_md": false 00:20:24.967 }, 00:20:24.967 "driver_specific": { 00:20:24.967 "raid": { 00:20:24.967 "uuid": "6de8ecd2-f4f1-480e-aa42-85e198c0a075", 00:20:24.967 "strip_size_kb": 64, 00:20:24.967 "state": "online", 00:20:24.967 "raid_level": "raid5f", 00:20:24.967 "superblock": false, 00:20:24.967 "num_base_bdevs": 4, 00:20:24.967 "num_base_bdevs_discovered": 4, 00:20:24.967 "num_base_bdevs_operational": 4, 00:20:24.967 "base_bdevs_list": [ 00:20:24.967 { 00:20:24.967 "name": "NewBaseBdev", 00:20:24.967 "uuid": "e8292da8-6e6a-405f-b9d0-583eadca57fa", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 0, 00:20:24.967 "data_size": 65536 00:20:24.967 }, 00:20:24.967 { 00:20:24.967 "name": "BaseBdev2", 00:20:24.967 "uuid": "e612d901-ffc5-4f7b-9fd8-1dac2ec320f8", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 0, 00:20:24.967 "data_size": 65536 00:20:24.967 }, 00:20:24.967 { 00:20:24.967 "name": "BaseBdev3", 00:20:24.967 "uuid": "93d02093-ea75-4d33-9c6e-75cf8c917401", 00:20:24.967 "is_configured": true, 00:20:24.967 "data_offset": 0, 00:20:24.967 "data_size": 65536 00:20:24.967 }, 00:20:24.967 { 00:20:24.968 "name": "BaseBdev4", 00:20:24.968 "uuid": "68af495f-8543-435d-80c4-4bd953020dea", 00:20:24.968 "is_configured": true, 00:20:24.968 "data_offset": 0, 00:20:24.968 "data_size": 65536 00:20:24.968 } 00:20:24.968 ] 00:20:24.968 } 00:20:24.968 } 00:20:24.968 }' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.968 14:19:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:24.968 BaseBdev2 00:20:24.968 BaseBdev3 00:20:24.968 BaseBdev4' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.968 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.227 14:19:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.227 [2024-11-27 14:19:55.917284] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.227 [2024-11-27 14:19:55.917325] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.227 [2024-11-27 14:19:55.917429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.227 [2024-11-27 14:19:55.917784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.227 [2024-11-27 14:19:55.917810] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83009 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83009 ']' 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83009 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83009 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.227 killing process with pid 83009 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83009' 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83009 00:20:25.227 [2024-11-27 14:19:55.963696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.227 14:19:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83009 00:20:25.794 [2024-11-27 14:19:56.442460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:27.171 00:20:27.171 real 0m12.254s 00:20:27.171 user 0m19.272s 00:20:27.171 sys 0m2.063s 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.171 ************************************ 00:20:27.171 END TEST raid5f_state_function_test 00:20:27.171 ************************************ 00:20:27.171 14:19:57 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:20:27.171 14:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:27.171 14:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.171 14:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:27.171 ************************************ 00:20:27.171 START TEST 
raid5f_state_function_test_sb 00:20:27.171 ************************************ 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:27.171 
14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:27.171 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83686 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83686' 00:20:27.172 Process raid pid: 83686 00:20:27.172 14:19:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83686 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83686 ']' 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.172 14:19:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.172 [2024-11-27 14:19:57.877455] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:27.172 [2024-11-27 14:19:57.877690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.172 [2024-11-27 14:19:58.058886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.430 [2024-11-27 14:19:58.185490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.689 [2024-11-27 14:19:58.399101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.689 [2024-11-27 14:19:58.399254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.948 [2024-11-27 14:19:58.757751] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.948 [2024-11-27 14:19:58.757822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.948 [2024-11-27 14:19:58.757836] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.948 [2024-11-27 14:19:58.757847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.948 [2024-11-27 14:19:58.757854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:20:27.948 [2024-11-27 14:19:58.757864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.948 [2024-11-27 14:19:58.757871] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:27.948 [2024-11-27 14:19:58.757881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.948 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.948 "name": "Existed_Raid", 00:20:27.948 "uuid": "88422f3a-a9c8-4e95-b581-ff01764230d8", 00:20:27.948 "strip_size_kb": 64, 00:20:27.948 "state": "configuring", 00:20:27.948 "raid_level": "raid5f", 00:20:27.948 "superblock": true, 00:20:27.948 "num_base_bdevs": 4, 00:20:27.949 "num_base_bdevs_discovered": 0, 00:20:27.949 "num_base_bdevs_operational": 4, 00:20:27.949 "base_bdevs_list": [ 00:20:27.949 { 00:20:27.949 "name": "BaseBdev1", 00:20:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.949 "is_configured": false, 00:20:27.949 "data_offset": 0, 00:20:27.949 "data_size": 0 00:20:27.949 }, 00:20:27.949 { 00:20:27.949 "name": "BaseBdev2", 00:20:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.949 "is_configured": false, 00:20:27.949 "data_offset": 0, 00:20:27.949 "data_size": 0 00:20:27.949 }, 00:20:27.949 { 00:20:27.949 "name": "BaseBdev3", 00:20:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.949 "is_configured": false, 00:20:27.949 "data_offset": 0, 00:20:27.949 "data_size": 0 00:20:27.949 }, 00:20:27.949 { 00:20:27.949 "name": "BaseBdev4", 00:20:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.949 "is_configured": false, 00:20:27.949 "data_offset": 0, 00:20:27.949 "data_size": 0 00:20:27.949 } 00:20:27.949 ] 00:20:27.949 }' 00:20:27.949 14:19:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.949 14:19:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.518 [2024-11-27 14:19:59.184983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:28.518 [2024-11-27 14:19:59.185107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.518 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 [2024-11-27 14:19:59.192985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:28.519 [2024-11-27 14:19:59.193092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:28.519 [2024-11-27 14:19:59.193147] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:28.519 [2024-11-27 14:19:59.193178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:28.519 [2024-11-27 14:19:59.193199] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:28.519 [2024-11-27 14:19:59.193241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:28.519 [2024-11-27 14:19:59.193285] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:28.519 [2024-11-27 14:19:59.193336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 [2024-11-27 14:19:59.242015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.519 BaseBdev1 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 [ 00:20:28.519 { 00:20:28.519 "name": "BaseBdev1", 00:20:28.519 "aliases": [ 00:20:28.519 "feadfa7c-0b35-46b4-b1f8-f7844e10f19e" 00:20:28.519 ], 00:20:28.519 "product_name": "Malloc disk", 00:20:28.519 "block_size": 512, 00:20:28.519 "num_blocks": 65536, 00:20:28.519 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:28.519 "assigned_rate_limits": { 00:20:28.519 "rw_ios_per_sec": 0, 00:20:28.519 "rw_mbytes_per_sec": 0, 00:20:28.519 "r_mbytes_per_sec": 0, 00:20:28.519 "w_mbytes_per_sec": 0 00:20:28.519 }, 00:20:28.519 "claimed": true, 00:20:28.519 "claim_type": "exclusive_write", 00:20:28.519 "zoned": false, 00:20:28.519 "supported_io_types": { 00:20:28.519 "read": true, 00:20:28.519 "write": true, 00:20:28.519 "unmap": true, 00:20:28.519 "flush": true, 00:20:28.519 "reset": true, 00:20:28.519 "nvme_admin": false, 00:20:28.519 "nvme_io": false, 00:20:28.519 "nvme_io_md": false, 00:20:28.519 "write_zeroes": true, 00:20:28.519 "zcopy": true, 00:20:28.519 "get_zone_info": false, 00:20:28.519 "zone_management": false, 00:20:28.519 "zone_append": false, 00:20:28.519 "compare": false, 00:20:28.519 "compare_and_write": false, 00:20:28.519 "abort": true, 00:20:28.519 "seek_hole": false, 00:20:28.519 "seek_data": false, 00:20:28.519 "copy": true, 00:20:28.519 "nvme_iov_md": false 00:20:28.519 }, 00:20:28.519 "memory_domains": [ 00:20:28.519 { 00:20:28.519 "dma_device_id": "system", 00:20:28.519 "dma_device_type": 1 00:20:28.519 }, 00:20:28.519 { 00:20:28.519 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:28.519 "dma_device_type": 2 00:20:28.519 } 00:20:28.519 ], 00:20:28.519 "driver_specific": {} 00:20:28.519 } 00:20:28.519 ] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.519 "name": "Existed_Raid", 00:20:28.519 "uuid": "5eb16c54-8361-4477-a109-ab54ad47da2c", 00:20:28.519 "strip_size_kb": 64, 00:20:28.519 "state": "configuring", 00:20:28.519 "raid_level": "raid5f", 00:20:28.519 "superblock": true, 00:20:28.519 "num_base_bdevs": 4, 00:20:28.519 "num_base_bdevs_discovered": 1, 00:20:28.519 "num_base_bdevs_operational": 4, 00:20:28.519 "base_bdevs_list": [ 00:20:28.519 { 00:20:28.519 "name": "BaseBdev1", 00:20:28.519 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:28.519 "is_configured": true, 00:20:28.519 "data_offset": 2048, 00:20:28.519 "data_size": 63488 00:20:28.519 }, 00:20:28.519 { 00:20:28.519 "name": "BaseBdev2", 00:20:28.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.519 "is_configured": false, 00:20:28.519 "data_offset": 0, 00:20:28.519 "data_size": 0 00:20:28.519 }, 00:20:28.519 { 00:20:28.519 "name": "BaseBdev3", 00:20:28.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.519 "is_configured": false, 00:20:28.519 "data_offset": 0, 00:20:28.519 "data_size": 0 00:20:28.519 }, 00:20:28.519 { 00:20:28.519 "name": "BaseBdev4", 00:20:28.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.519 "is_configured": false, 00:20:28.519 "data_offset": 0, 00:20:28.519 "data_size": 0 00:20:28.519 } 00:20:28.519 ] 00:20:28.519 }' 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.519 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.086 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:29.086 14:19:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.086 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.086 [2024-11-27 14:19:59.785184] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.086 [2024-11-27 14:19:59.785251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.087 [2024-11-27 14:19:59.797249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.087 [2024-11-27 14:19:59.799282] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:29.087 [2024-11-27 14:19:59.799335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:29.087 [2024-11-27 14:19:59.799346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:29.087 [2024-11-27 14:19:59.799358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:29.087 [2024-11-27 14:19:59.799366] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:29.087 [2024-11-27 14:19:59.799376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.087 14:19:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.087 "name": "Existed_Raid", 00:20:29.087 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:29.087 "strip_size_kb": 64, 00:20:29.087 "state": "configuring", 00:20:29.087 "raid_level": "raid5f", 00:20:29.087 "superblock": true, 00:20:29.087 "num_base_bdevs": 4, 00:20:29.087 "num_base_bdevs_discovered": 1, 00:20:29.087 "num_base_bdevs_operational": 4, 00:20:29.087 "base_bdevs_list": [ 00:20:29.087 { 00:20:29.087 "name": "BaseBdev1", 00:20:29.087 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:29.087 "is_configured": true, 00:20:29.087 "data_offset": 2048, 00:20:29.087 "data_size": 63488 00:20:29.087 }, 00:20:29.087 { 00:20:29.087 "name": "BaseBdev2", 00:20:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.087 "is_configured": false, 00:20:29.087 "data_offset": 0, 00:20:29.087 "data_size": 0 00:20:29.087 }, 00:20:29.087 { 00:20:29.087 "name": "BaseBdev3", 00:20:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.087 "is_configured": false, 00:20:29.087 "data_offset": 0, 00:20:29.087 "data_size": 0 00:20:29.087 }, 00:20:29.087 { 00:20:29.087 "name": "BaseBdev4", 00:20:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.087 "is_configured": false, 00:20:29.087 "data_offset": 0, 00:20:29.087 "data_size": 0 00:20:29.087 } 00:20:29.087 ] 00:20:29.087 }' 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.087 14:19:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.346 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.346 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:29.346 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.605 [2024-11-27 14:20:00.321881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.605 BaseBdev2 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.605 [ 00:20:29.605 { 00:20:29.605 "name": "BaseBdev2", 00:20:29.605 "aliases": [ 00:20:29.605 
"f980df5b-660f-4374-a5af-a6ed223636fd" 00:20:29.605 ], 00:20:29.605 "product_name": "Malloc disk", 00:20:29.605 "block_size": 512, 00:20:29.605 "num_blocks": 65536, 00:20:29.605 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:29.605 "assigned_rate_limits": { 00:20:29.605 "rw_ios_per_sec": 0, 00:20:29.605 "rw_mbytes_per_sec": 0, 00:20:29.605 "r_mbytes_per_sec": 0, 00:20:29.605 "w_mbytes_per_sec": 0 00:20:29.605 }, 00:20:29.605 "claimed": true, 00:20:29.605 "claim_type": "exclusive_write", 00:20:29.605 "zoned": false, 00:20:29.605 "supported_io_types": { 00:20:29.605 "read": true, 00:20:29.605 "write": true, 00:20:29.605 "unmap": true, 00:20:29.605 "flush": true, 00:20:29.605 "reset": true, 00:20:29.605 "nvme_admin": false, 00:20:29.605 "nvme_io": false, 00:20:29.605 "nvme_io_md": false, 00:20:29.605 "write_zeroes": true, 00:20:29.605 "zcopy": true, 00:20:29.605 "get_zone_info": false, 00:20:29.605 "zone_management": false, 00:20:29.605 "zone_append": false, 00:20:29.605 "compare": false, 00:20:29.605 "compare_and_write": false, 00:20:29.605 "abort": true, 00:20:29.605 "seek_hole": false, 00:20:29.605 "seek_data": false, 00:20:29.605 "copy": true, 00:20:29.605 "nvme_iov_md": false 00:20:29.605 }, 00:20:29.605 "memory_domains": [ 00:20:29.605 { 00:20:29.605 "dma_device_id": "system", 00:20:29.605 "dma_device_type": 1 00:20:29.605 }, 00:20:29.605 { 00:20:29.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.605 "dma_device_type": 2 00:20:29.605 } 00:20:29.605 ], 00:20:29.605 "driver_specific": {} 00:20:29.605 } 00:20:29.605 ] 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.605 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.606 "name": "Existed_Raid", 00:20:29.606 "uuid": 
"a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:29.606 "strip_size_kb": 64, 00:20:29.606 "state": "configuring", 00:20:29.606 "raid_level": "raid5f", 00:20:29.606 "superblock": true, 00:20:29.606 "num_base_bdevs": 4, 00:20:29.606 "num_base_bdevs_discovered": 2, 00:20:29.606 "num_base_bdevs_operational": 4, 00:20:29.606 "base_bdevs_list": [ 00:20:29.606 { 00:20:29.606 "name": "BaseBdev1", 00:20:29.606 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:29.606 "is_configured": true, 00:20:29.606 "data_offset": 2048, 00:20:29.606 "data_size": 63488 00:20:29.606 }, 00:20:29.606 { 00:20:29.606 "name": "BaseBdev2", 00:20:29.606 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:29.606 "is_configured": true, 00:20:29.606 "data_offset": 2048, 00:20:29.606 "data_size": 63488 00:20:29.606 }, 00:20:29.606 { 00:20:29.606 "name": "BaseBdev3", 00:20:29.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.606 "is_configured": false, 00:20:29.606 "data_offset": 0, 00:20:29.606 "data_size": 0 00:20:29.606 }, 00:20:29.606 { 00:20:29.606 "name": "BaseBdev4", 00:20:29.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.606 "is_configured": false, 00:20:29.606 "data_offset": 0, 00:20:29.606 "data_size": 0 00:20:29.606 } 00:20:29.606 ] 00:20:29.606 }' 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.606 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.174 [2024-11-27 14:20:00.897373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:30.174 BaseBdev3 
00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.174 [ 00:20:30.174 { 00:20:30.174 "name": "BaseBdev3", 00:20:30.174 "aliases": [ 00:20:30.174 "f23074a4-7a0e-4d3b-86ac-f24d7ed57793" 00:20:30.174 ], 00:20:30.174 "product_name": "Malloc disk", 00:20:30.174 "block_size": 512, 00:20:30.174 "num_blocks": 65536, 00:20:30.174 "uuid": "f23074a4-7a0e-4d3b-86ac-f24d7ed57793", 00:20:30.174 
"assigned_rate_limits": { 00:20:30.174 "rw_ios_per_sec": 0, 00:20:30.174 "rw_mbytes_per_sec": 0, 00:20:30.174 "r_mbytes_per_sec": 0, 00:20:30.174 "w_mbytes_per_sec": 0 00:20:30.174 }, 00:20:30.174 "claimed": true, 00:20:30.174 "claim_type": "exclusive_write", 00:20:30.174 "zoned": false, 00:20:30.174 "supported_io_types": { 00:20:30.174 "read": true, 00:20:30.174 "write": true, 00:20:30.174 "unmap": true, 00:20:30.174 "flush": true, 00:20:30.174 "reset": true, 00:20:30.174 "nvme_admin": false, 00:20:30.174 "nvme_io": false, 00:20:30.174 "nvme_io_md": false, 00:20:30.174 "write_zeroes": true, 00:20:30.174 "zcopy": true, 00:20:30.174 "get_zone_info": false, 00:20:30.174 "zone_management": false, 00:20:30.174 "zone_append": false, 00:20:30.174 "compare": false, 00:20:30.174 "compare_and_write": false, 00:20:30.174 "abort": true, 00:20:30.174 "seek_hole": false, 00:20:30.174 "seek_data": false, 00:20:30.174 "copy": true, 00:20:30.174 "nvme_iov_md": false 00:20:30.174 }, 00:20:30.174 "memory_domains": [ 00:20:30.174 { 00:20:30.174 "dma_device_id": "system", 00:20:30.174 "dma_device_type": 1 00:20:30.174 }, 00:20:30.174 { 00:20:30.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.174 "dma_device_type": 2 00:20:30.174 } 00:20:30.174 ], 00:20:30.174 "driver_specific": {} 00:20:30.174 } 00:20:30.174 ] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.174 "name": "Existed_Raid", 00:20:30.174 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:30.174 "strip_size_kb": 64, 00:20:30.174 "state": "configuring", 00:20:30.174 "raid_level": "raid5f", 00:20:30.174 "superblock": true, 00:20:30.174 "num_base_bdevs": 4, 00:20:30.174 "num_base_bdevs_discovered": 3, 
00:20:30.174 "num_base_bdevs_operational": 4, 00:20:30.174 "base_bdevs_list": [ 00:20:30.174 { 00:20:30.174 "name": "BaseBdev1", 00:20:30.174 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:30.174 "is_configured": true, 00:20:30.174 "data_offset": 2048, 00:20:30.174 "data_size": 63488 00:20:30.174 }, 00:20:30.174 { 00:20:30.174 "name": "BaseBdev2", 00:20:30.174 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:30.174 "is_configured": true, 00:20:30.174 "data_offset": 2048, 00:20:30.174 "data_size": 63488 00:20:30.174 }, 00:20:30.174 { 00:20:30.174 "name": "BaseBdev3", 00:20:30.174 "uuid": "f23074a4-7a0e-4d3b-86ac-f24d7ed57793", 00:20:30.174 "is_configured": true, 00:20:30.174 "data_offset": 2048, 00:20:30.174 "data_size": 63488 00:20:30.174 }, 00:20:30.174 { 00:20:30.174 "name": "BaseBdev4", 00:20:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.174 "is_configured": false, 00:20:30.174 "data_offset": 0, 00:20:30.174 "data_size": 0 00:20:30.174 } 00:20:30.174 ] 00:20:30.174 }' 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.174 14:20:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.433 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:30.433 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.433 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.692 [2024-11-27 14:20:01.414171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:30.692 [2024-11-27 14:20:01.414537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:30.692 [2024-11-27 14:20:01.414553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:30.692 [2024-11-27 
14:20:01.414917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:30.692 BaseBdev4 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.692 [2024-11-27 14:20:01.423687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:30.692 [2024-11-27 14:20:01.423793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:30.692 [2024-11-27 14:20:01.424205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:30.692 14:20:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.692 [ 00:20:30.692 { 00:20:30.692 "name": "BaseBdev4", 00:20:30.692 "aliases": [ 00:20:30.692 "9deb5fae-f38e-4b9d-8f10-b30d60f7fd4a" 00:20:30.692 ], 00:20:30.692 "product_name": "Malloc disk", 00:20:30.692 "block_size": 512, 00:20:30.692 "num_blocks": 65536, 00:20:30.692 "uuid": "9deb5fae-f38e-4b9d-8f10-b30d60f7fd4a", 00:20:30.692 "assigned_rate_limits": { 00:20:30.692 "rw_ios_per_sec": 0, 00:20:30.692 "rw_mbytes_per_sec": 0, 00:20:30.692 "r_mbytes_per_sec": 0, 00:20:30.692 "w_mbytes_per_sec": 0 00:20:30.692 }, 00:20:30.692 "claimed": true, 00:20:30.692 "claim_type": "exclusive_write", 00:20:30.692 "zoned": false, 00:20:30.692 "supported_io_types": { 00:20:30.692 "read": true, 00:20:30.692 "write": true, 00:20:30.692 "unmap": true, 00:20:30.692 "flush": true, 00:20:30.692 "reset": true, 00:20:30.692 "nvme_admin": false, 00:20:30.692 "nvme_io": false, 00:20:30.692 "nvme_io_md": false, 00:20:30.692 "write_zeroes": true, 00:20:30.692 "zcopy": true, 00:20:30.692 "get_zone_info": false, 00:20:30.692 "zone_management": false, 00:20:30.692 "zone_append": false, 00:20:30.692 "compare": false, 00:20:30.692 "compare_and_write": false, 00:20:30.692 "abort": true, 00:20:30.692 "seek_hole": false, 00:20:30.692 "seek_data": false, 00:20:30.692 "copy": true, 00:20:30.692 "nvme_iov_md": false 00:20:30.692 }, 00:20:30.692 "memory_domains": [ 00:20:30.692 { 00:20:30.692 "dma_device_id": "system", 00:20:30.692 "dma_device_type": 1 00:20:30.692 }, 00:20:30.692 { 00:20:30.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.692 "dma_device_type": 2 00:20:30.692 } 00:20:30.692 ], 00:20:30.692 "driver_specific": {} 00:20:30.692 } 00:20:30.692 ] 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.692 14:20:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.692 "name": "Existed_Raid", 00:20:30.692 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:30.692 "strip_size_kb": 64, 00:20:30.692 "state": "online", 00:20:30.692 "raid_level": "raid5f", 00:20:30.692 "superblock": true, 00:20:30.692 "num_base_bdevs": 4, 00:20:30.692 "num_base_bdevs_discovered": 4, 00:20:30.692 "num_base_bdevs_operational": 4, 00:20:30.692 "base_bdevs_list": [ 00:20:30.692 { 00:20:30.692 "name": "BaseBdev1", 00:20:30.692 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:30.692 "is_configured": true, 00:20:30.692 "data_offset": 2048, 00:20:30.692 "data_size": 63488 00:20:30.692 }, 00:20:30.692 { 00:20:30.692 "name": "BaseBdev2", 00:20:30.692 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:30.692 "is_configured": true, 00:20:30.692 "data_offset": 2048, 00:20:30.692 "data_size": 63488 00:20:30.692 }, 00:20:30.692 { 00:20:30.692 "name": "BaseBdev3", 00:20:30.692 "uuid": "f23074a4-7a0e-4d3b-86ac-f24d7ed57793", 00:20:30.692 "is_configured": true, 00:20:30.692 "data_offset": 2048, 00:20:30.692 "data_size": 63488 00:20:30.692 }, 00:20:30.692 { 00:20:30.692 "name": "BaseBdev4", 00:20:30.692 "uuid": "9deb5fae-f38e-4b9d-8f10-b30d60f7fd4a", 00:20:30.692 "is_configured": true, 00:20:30.692 "data_offset": 2048, 00:20:30.692 "data_size": 63488 00:20:30.692 } 00:20:30.692 ] 00:20:30.692 }' 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.692 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:31.259 [2024-11-27 14:20:01.925461] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:31.259 "name": "Existed_Raid", 00:20:31.259 "aliases": [ 00:20:31.259 "a1dde01d-60e4-41be-b467-00c15c0181f5" 00:20:31.259 ], 00:20:31.259 "product_name": "Raid Volume", 00:20:31.259 "block_size": 512, 00:20:31.259 "num_blocks": 190464, 00:20:31.259 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:31.259 "assigned_rate_limits": { 00:20:31.259 "rw_ios_per_sec": 0, 00:20:31.259 "rw_mbytes_per_sec": 0, 00:20:31.259 "r_mbytes_per_sec": 0, 00:20:31.259 "w_mbytes_per_sec": 0 00:20:31.259 }, 00:20:31.259 "claimed": false, 00:20:31.259 "zoned": false, 00:20:31.259 "supported_io_types": { 00:20:31.259 "read": true, 00:20:31.259 "write": true, 00:20:31.259 "unmap": false, 00:20:31.259 "flush": false, 
00:20:31.259 "reset": true, 00:20:31.259 "nvme_admin": false, 00:20:31.259 "nvme_io": false, 00:20:31.259 "nvme_io_md": false, 00:20:31.259 "write_zeroes": true, 00:20:31.259 "zcopy": false, 00:20:31.259 "get_zone_info": false, 00:20:31.259 "zone_management": false, 00:20:31.259 "zone_append": false, 00:20:31.259 "compare": false, 00:20:31.259 "compare_and_write": false, 00:20:31.259 "abort": false, 00:20:31.259 "seek_hole": false, 00:20:31.259 "seek_data": false, 00:20:31.259 "copy": false, 00:20:31.259 "nvme_iov_md": false 00:20:31.259 }, 00:20:31.259 "driver_specific": { 00:20:31.259 "raid": { 00:20:31.259 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:31.259 "strip_size_kb": 64, 00:20:31.259 "state": "online", 00:20:31.259 "raid_level": "raid5f", 00:20:31.259 "superblock": true, 00:20:31.259 "num_base_bdevs": 4, 00:20:31.259 "num_base_bdevs_discovered": 4, 00:20:31.259 "num_base_bdevs_operational": 4, 00:20:31.259 "base_bdevs_list": [ 00:20:31.259 { 00:20:31.259 "name": "BaseBdev1", 00:20:31.259 "uuid": "feadfa7c-0b35-46b4-b1f8-f7844e10f19e", 00:20:31.259 "is_configured": true, 00:20:31.259 "data_offset": 2048, 00:20:31.259 "data_size": 63488 00:20:31.259 }, 00:20:31.259 { 00:20:31.259 "name": "BaseBdev2", 00:20:31.259 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:31.259 "is_configured": true, 00:20:31.259 "data_offset": 2048, 00:20:31.259 "data_size": 63488 00:20:31.259 }, 00:20:31.259 { 00:20:31.259 "name": "BaseBdev3", 00:20:31.259 "uuid": "f23074a4-7a0e-4d3b-86ac-f24d7ed57793", 00:20:31.259 "is_configured": true, 00:20:31.259 "data_offset": 2048, 00:20:31.259 "data_size": 63488 00:20:31.259 }, 00:20:31.259 { 00:20:31.259 "name": "BaseBdev4", 00:20:31.259 "uuid": "9deb5fae-f38e-4b9d-8f10-b30d60f7fd4a", 00:20:31.259 "is_configured": true, 00:20:31.259 "data_offset": 2048, 00:20:31.259 "data_size": 63488 00:20:31.259 } 00:20:31.259 ] 00:20:31.259 } 00:20:31.259 } 00:20:31.259 }' 00:20:31.259 14:20:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:31.259 BaseBdev2 00:20:31.259 BaseBdev3 00:20:31.259 BaseBdev4' 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.259 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.260 14:20:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:31.260 14:20:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.260 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.518 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:31.518 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:31.518 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:31.518 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.518 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 [2024-11-27 14:20:02.224784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.519 "name": "Existed_Raid", 00:20:31.519 "uuid": "a1dde01d-60e4-41be-b467-00c15c0181f5", 00:20:31.519 "strip_size_kb": 64, 00:20:31.519 "state": "online", 00:20:31.519 "raid_level": "raid5f", 00:20:31.519 "superblock": true, 00:20:31.519 "num_base_bdevs": 4, 00:20:31.519 "num_base_bdevs_discovered": 3, 00:20:31.519 "num_base_bdevs_operational": 3, 00:20:31.519 "base_bdevs_list": [ 00:20:31.519 { 00:20:31.519 "name": 
null, 00:20:31.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.519 "is_configured": false, 00:20:31.519 "data_offset": 0, 00:20:31.519 "data_size": 63488 00:20:31.519 }, 00:20:31.519 { 00:20:31.519 "name": "BaseBdev2", 00:20:31.519 "uuid": "f980df5b-660f-4374-a5af-a6ed223636fd", 00:20:31.519 "is_configured": true, 00:20:31.519 "data_offset": 2048, 00:20:31.519 "data_size": 63488 00:20:31.519 }, 00:20:31.519 { 00:20:31.519 "name": "BaseBdev3", 00:20:31.519 "uuid": "f23074a4-7a0e-4d3b-86ac-f24d7ed57793", 00:20:31.519 "is_configured": true, 00:20:31.519 "data_offset": 2048, 00:20:31.519 "data_size": 63488 00:20:31.519 }, 00:20:31.519 { 00:20:31.519 "name": "BaseBdev4", 00:20:31.519 "uuid": "9deb5fae-f38e-4b9d-8f10-b30d60f7fd4a", 00:20:31.519 "is_configured": true, 00:20:31.519 "data_offset": 2048, 00:20:31.519 "data_size": 63488 00:20:31.519 } 00:20:31.519 ] 00:20:31.519 }' 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.519 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.088 [2024-11-27 14:20:02.821585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:32.088 [2024-11-27 14:20:02.821844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.088 [2024-11-27 14:20:02.933788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.088 14:20:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.088 [2024-11-27 14:20:02.993714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.347 [2024-11-27 
14:20:03.163491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:32.347 [2024-11-27 14:20:03.163633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.347 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.607 14:20:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.607 BaseBdev2 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.607 [ 00:20:32.607 { 00:20:32.607 "name": "BaseBdev2", 00:20:32.607 "aliases": [ 00:20:32.607 "e890c9b1-4270-4a03-8db7-3c71ee665f8f" 00:20:32.607 ], 00:20:32.607 "product_name": "Malloc disk", 00:20:32.607 "block_size": 512, 00:20:32.607 
"num_blocks": 65536, 00:20:32.607 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:32.607 "assigned_rate_limits": { 00:20:32.607 "rw_ios_per_sec": 0, 00:20:32.607 "rw_mbytes_per_sec": 0, 00:20:32.607 "r_mbytes_per_sec": 0, 00:20:32.607 "w_mbytes_per_sec": 0 00:20:32.607 }, 00:20:32.607 "claimed": false, 00:20:32.607 "zoned": false, 00:20:32.607 "supported_io_types": { 00:20:32.607 "read": true, 00:20:32.607 "write": true, 00:20:32.607 "unmap": true, 00:20:32.607 "flush": true, 00:20:32.607 "reset": true, 00:20:32.607 "nvme_admin": false, 00:20:32.607 "nvme_io": false, 00:20:32.607 "nvme_io_md": false, 00:20:32.607 "write_zeroes": true, 00:20:32.607 "zcopy": true, 00:20:32.607 "get_zone_info": false, 00:20:32.607 "zone_management": false, 00:20:32.607 "zone_append": false, 00:20:32.607 "compare": false, 00:20:32.607 "compare_and_write": false, 00:20:32.607 "abort": true, 00:20:32.607 "seek_hole": false, 00:20:32.607 "seek_data": false, 00:20:32.607 "copy": true, 00:20:32.607 "nvme_iov_md": false 00:20:32.607 }, 00:20:32.607 "memory_domains": [ 00:20:32.607 { 00:20:32.607 "dma_device_id": "system", 00:20:32.607 "dma_device_type": 1 00:20:32.607 }, 00:20:32.607 { 00:20:32.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.607 "dma_device_type": 2 00:20:32.607 } 00:20:32.607 ], 00:20:32.607 "driver_specific": {} 00:20:32.607 } 00:20:32.607 ] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:32.607 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:32.608 14:20:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.608 BaseBdev3 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.608 [ 00:20:32.608 { 00:20:32.608 "name": "BaseBdev3", 00:20:32.608 "aliases": [ 00:20:32.608 
"19a8b04a-24d3-4395-af5a-1940faba557b" 00:20:32.608 ], 00:20:32.608 "product_name": "Malloc disk", 00:20:32.608 "block_size": 512, 00:20:32.608 "num_blocks": 65536, 00:20:32.608 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:32.608 "assigned_rate_limits": { 00:20:32.608 "rw_ios_per_sec": 0, 00:20:32.608 "rw_mbytes_per_sec": 0, 00:20:32.608 "r_mbytes_per_sec": 0, 00:20:32.608 "w_mbytes_per_sec": 0 00:20:32.608 }, 00:20:32.608 "claimed": false, 00:20:32.608 "zoned": false, 00:20:32.608 "supported_io_types": { 00:20:32.608 "read": true, 00:20:32.608 "write": true, 00:20:32.608 "unmap": true, 00:20:32.608 "flush": true, 00:20:32.608 "reset": true, 00:20:32.608 "nvme_admin": false, 00:20:32.608 "nvme_io": false, 00:20:32.608 "nvme_io_md": false, 00:20:32.608 "write_zeroes": true, 00:20:32.608 "zcopy": true, 00:20:32.608 "get_zone_info": false, 00:20:32.608 "zone_management": false, 00:20:32.608 "zone_append": false, 00:20:32.608 "compare": false, 00:20:32.608 "compare_and_write": false, 00:20:32.608 "abort": true, 00:20:32.608 "seek_hole": false, 00:20:32.608 "seek_data": false, 00:20:32.608 "copy": true, 00:20:32.608 "nvme_iov_md": false 00:20:32.608 }, 00:20:32.608 "memory_domains": [ 00:20:32.608 { 00:20:32.608 "dma_device_id": "system", 00:20:32.608 "dma_device_type": 1 00:20:32.608 }, 00:20:32.608 { 00:20:32.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.608 "dma_device_type": 2 00:20:32.608 } 00:20:32.608 ], 00:20:32.608 "driver_specific": {} 00:20:32.608 } 00:20:32.608 ] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:32.608 14:20:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.608 BaseBdev4 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.608 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.868 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.868 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:32.868 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.868 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:32.868 [ 00:20:32.868 { 00:20:32.868 "name": "BaseBdev4", 00:20:32.868 "aliases": [ 00:20:32.868 "15aad56d-e0c8-4ea0-af51-ecb1a456e003" 00:20:32.868 ], 00:20:32.868 "product_name": "Malloc disk", 00:20:32.868 "block_size": 512, 00:20:32.868 "num_blocks": 65536, 00:20:32.868 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:32.868 "assigned_rate_limits": { 00:20:32.868 "rw_ios_per_sec": 0, 00:20:32.868 "rw_mbytes_per_sec": 0, 00:20:32.868 "r_mbytes_per_sec": 0, 00:20:32.868 "w_mbytes_per_sec": 0 00:20:32.868 }, 00:20:32.868 "claimed": false, 00:20:32.868 "zoned": false, 00:20:32.868 "supported_io_types": { 00:20:32.868 "read": true, 00:20:32.868 "write": true, 00:20:32.868 "unmap": true, 00:20:32.868 "flush": true, 00:20:32.868 "reset": true, 00:20:32.868 "nvme_admin": false, 00:20:32.868 "nvme_io": false, 00:20:32.868 "nvme_io_md": false, 00:20:32.868 "write_zeroes": true, 00:20:32.868 "zcopy": true, 00:20:32.868 "get_zone_info": false, 00:20:32.868 "zone_management": false, 00:20:32.868 "zone_append": false, 00:20:32.868 "compare": false, 00:20:32.868 "compare_and_write": false, 00:20:32.868 "abort": true, 00:20:32.868 "seek_hole": false, 00:20:32.868 "seek_data": false, 00:20:32.868 "copy": true, 00:20:32.868 "nvme_iov_md": false 00:20:32.868 }, 00:20:32.868 "memory_domains": [ 00:20:32.868 { 00:20:32.868 "dma_device_id": "system", 00:20:32.868 "dma_device_type": 1 00:20:32.868 }, 00:20:32.868 { 00:20:32.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:32.869 "dma_device_type": 2 00:20:32.869 } 00:20:32.869 ], 00:20:32.869 "driver_specific": {} 00:20:32.869 } 00:20:32.869 ] 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:32.869 14:20:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.869 [2024-11-27 14:20:03.591177] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:32.869 [2024-11-27 14:20:03.591306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:32.869 [2024-11-27 14:20:03.591362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.869 [2024-11-27 14:20:03.593530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:32.869 [2024-11-27 14:20:03.593672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.869 "name": "Existed_Raid", 00:20:32.869 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:32.869 "strip_size_kb": 64, 00:20:32.869 "state": "configuring", 00:20:32.869 "raid_level": "raid5f", 00:20:32.869 "superblock": true, 00:20:32.869 "num_base_bdevs": 4, 00:20:32.869 "num_base_bdevs_discovered": 3, 00:20:32.869 "num_base_bdevs_operational": 4, 00:20:32.869 "base_bdevs_list": [ 00:20:32.869 { 00:20:32.869 "name": "BaseBdev1", 00:20:32.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.869 "is_configured": false, 00:20:32.869 "data_offset": 0, 00:20:32.869 "data_size": 0 00:20:32.869 }, 00:20:32.869 { 00:20:32.869 "name": "BaseBdev2", 00:20:32.869 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:32.869 "is_configured": true, 00:20:32.869 "data_offset": 2048, 00:20:32.869 
"data_size": 63488 00:20:32.869 }, 00:20:32.869 { 00:20:32.869 "name": "BaseBdev3", 00:20:32.869 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:32.869 "is_configured": true, 00:20:32.869 "data_offset": 2048, 00:20:32.869 "data_size": 63488 00:20:32.869 }, 00:20:32.869 { 00:20:32.869 "name": "BaseBdev4", 00:20:32.869 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:32.869 "is_configured": true, 00:20:32.869 "data_offset": 2048, 00:20:32.869 "data_size": 63488 00:20:32.869 } 00:20:32.869 ] 00:20:32.869 }' 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.869 14:20:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.129 [2024-11-27 14:20:04.070343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.129 14:20:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.129 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.388 "name": "Existed_Raid", 00:20:33.388 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:33.388 "strip_size_kb": 64, 00:20:33.388 "state": "configuring", 00:20:33.388 "raid_level": "raid5f", 00:20:33.388 "superblock": true, 00:20:33.388 "num_base_bdevs": 4, 00:20:33.388 "num_base_bdevs_discovered": 2, 00:20:33.388 "num_base_bdevs_operational": 4, 00:20:33.388 "base_bdevs_list": [ 00:20:33.388 { 00:20:33.388 "name": "BaseBdev1", 00:20:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.388 "is_configured": false, 00:20:33.388 "data_offset": 0, 00:20:33.388 "data_size": 0 00:20:33.388 }, 00:20:33.388 { 00:20:33.388 "name": null, 00:20:33.388 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:33.388 
"is_configured": false, 00:20:33.388 "data_offset": 0, 00:20:33.388 "data_size": 63488 00:20:33.388 }, 00:20:33.388 { 00:20:33.388 "name": "BaseBdev3", 00:20:33.388 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:33.388 "is_configured": true, 00:20:33.388 "data_offset": 2048, 00:20:33.388 "data_size": 63488 00:20:33.388 }, 00:20:33.388 { 00:20:33.388 "name": "BaseBdev4", 00:20:33.388 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:33.388 "is_configured": true, 00:20:33.388 "data_offset": 2048, 00:20:33.388 "data_size": 63488 00:20:33.388 } 00:20:33.388 ] 00:20:33.388 }' 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.388 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:33.646 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 [2024-11-27 14:20:04.641562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:20:33.908 BaseBdev1 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 [ 00:20:33.908 { 00:20:33.908 "name": "BaseBdev1", 00:20:33.908 "aliases": [ 00:20:33.908 "78fc8dbf-4f13-4a97-8340-294d281aff82" 00:20:33.908 ], 00:20:33.908 "product_name": "Malloc disk", 00:20:33.908 "block_size": 512, 00:20:33.908 "num_blocks": 65536, 00:20:33.908 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 
00:20:33.908 "assigned_rate_limits": { 00:20:33.908 "rw_ios_per_sec": 0, 00:20:33.908 "rw_mbytes_per_sec": 0, 00:20:33.908 "r_mbytes_per_sec": 0, 00:20:33.908 "w_mbytes_per_sec": 0 00:20:33.908 }, 00:20:33.908 "claimed": true, 00:20:33.908 "claim_type": "exclusive_write", 00:20:33.908 "zoned": false, 00:20:33.908 "supported_io_types": { 00:20:33.908 "read": true, 00:20:33.908 "write": true, 00:20:33.908 "unmap": true, 00:20:33.908 "flush": true, 00:20:33.908 "reset": true, 00:20:33.908 "nvme_admin": false, 00:20:33.908 "nvme_io": false, 00:20:33.908 "nvme_io_md": false, 00:20:33.908 "write_zeroes": true, 00:20:33.908 "zcopy": true, 00:20:33.908 "get_zone_info": false, 00:20:33.908 "zone_management": false, 00:20:33.908 "zone_append": false, 00:20:33.908 "compare": false, 00:20:33.908 "compare_and_write": false, 00:20:33.908 "abort": true, 00:20:33.908 "seek_hole": false, 00:20:33.908 "seek_data": false, 00:20:33.908 "copy": true, 00:20:33.908 "nvme_iov_md": false 00:20:33.908 }, 00:20:33.908 "memory_domains": [ 00:20:33.908 { 00:20:33.908 "dma_device_id": "system", 00:20:33.908 "dma_device_type": 1 00:20:33.908 }, 00:20:33.908 { 00:20:33.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.908 "dma_device_type": 2 00:20:33.908 } 00:20:33.908 ], 00:20:33.908 "driver_specific": {} 00:20:33.908 } 00:20:33.908 ] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:33.908 14:20:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.908 "name": "Existed_Raid", 00:20:33.908 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:33.908 "strip_size_kb": 64, 00:20:33.908 "state": "configuring", 00:20:33.908 "raid_level": "raid5f", 00:20:33.908 "superblock": true, 00:20:33.908 "num_base_bdevs": 4, 00:20:33.908 "num_base_bdevs_discovered": 3, 00:20:33.908 "num_base_bdevs_operational": 4, 00:20:33.908 "base_bdevs_list": [ 00:20:33.908 { 00:20:33.908 "name": "BaseBdev1", 00:20:33.908 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 
00:20:33.908 "is_configured": true, 00:20:33.908 "data_offset": 2048, 00:20:33.908 "data_size": 63488 00:20:33.908 }, 00:20:33.908 { 00:20:33.908 "name": null, 00:20:33.908 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:33.908 "is_configured": false, 00:20:33.908 "data_offset": 0, 00:20:33.908 "data_size": 63488 00:20:33.908 }, 00:20:33.908 { 00:20:33.908 "name": "BaseBdev3", 00:20:33.908 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:33.908 "is_configured": true, 00:20:33.908 "data_offset": 2048, 00:20:33.908 "data_size": 63488 00:20:33.908 }, 00:20:33.908 { 00:20:33.908 "name": "BaseBdev4", 00:20:33.908 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:33.908 "is_configured": true, 00:20:33.908 "data_offset": 2048, 00:20:33.908 "data_size": 63488 00:20:33.908 } 00:20:33.908 ] 00:20:33.908 }' 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.908 14:20:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.480 [2024-11-27 14:20:05.204795] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:34.480 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.481 "name": "Existed_Raid", 00:20:34.481 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:34.481 "strip_size_kb": 64, 00:20:34.481 "state": "configuring", 00:20:34.481 "raid_level": "raid5f", 00:20:34.481 "superblock": true, 00:20:34.481 "num_base_bdevs": 4, 00:20:34.481 "num_base_bdevs_discovered": 2, 00:20:34.481 "num_base_bdevs_operational": 4, 00:20:34.481 "base_bdevs_list": [ 00:20:34.481 { 00:20:34.481 "name": "BaseBdev1", 00:20:34.481 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:34.481 "is_configured": true, 00:20:34.481 "data_offset": 2048, 00:20:34.481 "data_size": 63488 00:20:34.481 }, 00:20:34.481 { 00:20:34.481 "name": null, 00:20:34.481 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:34.481 "is_configured": false, 00:20:34.481 "data_offset": 0, 00:20:34.481 "data_size": 63488 00:20:34.481 }, 00:20:34.481 { 00:20:34.481 "name": null, 00:20:34.481 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:34.481 "is_configured": false, 00:20:34.481 "data_offset": 0, 00:20:34.481 "data_size": 63488 00:20:34.481 }, 00:20:34.481 { 00:20:34.481 "name": "BaseBdev4", 00:20:34.481 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:34.481 "is_configured": true, 00:20:34.481 "data_offset": 2048, 00:20:34.481 "data_size": 63488 00:20:34.481 } 00:20:34.481 ] 00:20:34.481 }' 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.481 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.741 [2024-11-27 14:20:05.679976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.741 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.000 "name": "Existed_Raid", 00:20:35.000 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:35.000 "strip_size_kb": 64, 00:20:35.000 "state": "configuring", 00:20:35.000 "raid_level": "raid5f", 00:20:35.000 "superblock": true, 00:20:35.000 "num_base_bdevs": 4, 00:20:35.000 "num_base_bdevs_discovered": 3, 00:20:35.000 "num_base_bdevs_operational": 4, 00:20:35.000 "base_bdevs_list": [ 00:20:35.000 { 00:20:35.000 "name": "BaseBdev1", 00:20:35.000 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:35.000 "is_configured": true, 00:20:35.000 "data_offset": 2048, 00:20:35.000 "data_size": 63488 00:20:35.000 }, 00:20:35.000 { 00:20:35.000 "name": null, 00:20:35.000 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:35.000 "is_configured": false, 00:20:35.000 "data_offset": 0, 00:20:35.000 "data_size": 63488 00:20:35.000 }, 00:20:35.000 { 00:20:35.000 "name": "BaseBdev3", 00:20:35.000 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 
00:20:35.000 "is_configured": true, 00:20:35.000 "data_offset": 2048, 00:20:35.000 "data_size": 63488 00:20:35.000 }, 00:20:35.000 { 00:20:35.000 "name": "BaseBdev4", 00:20:35.000 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:35.000 "is_configured": true, 00:20:35.000 "data_offset": 2048, 00:20:35.000 "data_size": 63488 00:20:35.000 } 00:20:35.000 ] 00:20:35.000 }' 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.000 14:20:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.259 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:35.259 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.259 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.259 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.259 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.519 [2024-11-27 14:20:06.227074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.519 "name": "Existed_Raid", 00:20:35.519 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:35.519 "strip_size_kb": 64, 00:20:35.519 "state": "configuring", 00:20:35.519 "raid_level": "raid5f", 
00:20:35.519 "superblock": true, 00:20:35.519 "num_base_bdevs": 4, 00:20:35.519 "num_base_bdevs_discovered": 2, 00:20:35.519 "num_base_bdevs_operational": 4, 00:20:35.519 "base_bdevs_list": [ 00:20:35.519 { 00:20:35.519 "name": null, 00:20:35.519 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:35.519 "is_configured": false, 00:20:35.519 "data_offset": 0, 00:20:35.519 "data_size": 63488 00:20:35.519 }, 00:20:35.519 { 00:20:35.519 "name": null, 00:20:35.519 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:35.519 "is_configured": false, 00:20:35.519 "data_offset": 0, 00:20:35.519 "data_size": 63488 00:20:35.519 }, 00:20:35.519 { 00:20:35.519 "name": "BaseBdev3", 00:20:35.519 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:35.519 "is_configured": true, 00:20:35.519 "data_offset": 2048, 00:20:35.519 "data_size": 63488 00:20:35.519 }, 00:20:35.519 { 00:20:35.519 "name": "BaseBdev4", 00:20:35.519 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:35.519 "is_configured": true, 00:20:35.519 "data_offset": 2048, 00:20:35.519 "data_size": 63488 00:20:35.519 } 00:20:35.519 ] 00:20:35.519 }' 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.519 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.088 [2024-11-27 14:20:06.823909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.088 "name": "Existed_Raid", 00:20:36.088 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:36.088 "strip_size_kb": 64, 00:20:36.088 "state": "configuring", 00:20:36.088 "raid_level": "raid5f", 00:20:36.088 "superblock": true, 00:20:36.088 "num_base_bdevs": 4, 00:20:36.088 "num_base_bdevs_discovered": 3, 00:20:36.088 "num_base_bdevs_operational": 4, 00:20:36.088 "base_bdevs_list": [ 00:20:36.088 { 00:20:36.088 "name": null, 00:20:36.088 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:36.088 "is_configured": false, 00:20:36.088 "data_offset": 0, 00:20:36.088 "data_size": 63488 00:20:36.088 }, 00:20:36.088 { 00:20:36.088 "name": "BaseBdev2", 00:20:36.088 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:36.088 "is_configured": true, 00:20:36.088 "data_offset": 2048, 00:20:36.088 "data_size": 63488 00:20:36.088 }, 00:20:36.088 { 00:20:36.088 "name": "BaseBdev3", 00:20:36.088 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:36.088 "is_configured": true, 00:20:36.088 "data_offset": 2048, 00:20:36.088 "data_size": 63488 00:20:36.088 }, 00:20:36.088 { 00:20:36.088 "name": "BaseBdev4", 00:20:36.088 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:36.088 "is_configured": true, 00:20:36.088 "data_offset": 2048, 00:20:36.088 "data_size": 63488 00:20:36.088 } 00:20:36.088 ] 00:20:36.088 }' 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:20:36.088 14:20:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.347 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.347 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:36.347 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.347 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.347 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 78fc8dbf-4f13-4a97-8340-294d281aff82 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.606 [2024-11-27 14:20:07.409362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:36.606 [2024-11-27 14:20:07.409644] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:36.606 [2024-11-27 14:20:07.409659] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:36.606 [2024-11-27 14:20:07.409952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:36.606 NewBaseBdev 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.606 [2024-11-27 14:20:07.418259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:36.606 [2024-11-27 14:20:07.418299] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:36.606 [2024-11-27 14:20:07.418633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.606 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.606 [ 00:20:36.606 { 00:20:36.606 "name": "NewBaseBdev", 00:20:36.606 "aliases": [ 00:20:36.606 "78fc8dbf-4f13-4a97-8340-294d281aff82" 00:20:36.606 ], 00:20:36.606 "product_name": "Malloc disk", 00:20:36.606 "block_size": 512, 00:20:36.606 "num_blocks": 65536, 00:20:36.606 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:36.606 "assigned_rate_limits": { 00:20:36.606 "rw_ios_per_sec": 0, 00:20:36.607 "rw_mbytes_per_sec": 0, 00:20:36.607 "r_mbytes_per_sec": 0, 00:20:36.607 "w_mbytes_per_sec": 0 00:20:36.607 }, 00:20:36.607 "claimed": true, 00:20:36.607 "claim_type": "exclusive_write", 00:20:36.607 "zoned": false, 00:20:36.607 "supported_io_types": { 00:20:36.607 "read": true, 00:20:36.607 "write": true, 00:20:36.607 "unmap": true, 00:20:36.607 "flush": true, 00:20:36.607 "reset": true, 00:20:36.607 "nvme_admin": false, 00:20:36.607 "nvme_io": false, 00:20:36.607 "nvme_io_md": false, 00:20:36.607 "write_zeroes": true, 00:20:36.607 "zcopy": true, 00:20:36.607 "get_zone_info": false, 00:20:36.607 "zone_management": false, 00:20:36.607 "zone_append": false, 00:20:36.607 "compare": false, 00:20:36.607 "compare_and_write": false, 00:20:36.607 "abort": true, 00:20:36.607 "seek_hole": false, 00:20:36.607 "seek_data": false, 00:20:36.607 "copy": true, 00:20:36.607 "nvme_iov_md": false 00:20:36.607 }, 00:20:36.607 "memory_domains": [ 00:20:36.607 { 00:20:36.607 "dma_device_id": "system", 00:20:36.607 "dma_device_type": 1 00:20:36.607 }, 00:20:36.607 { 00:20:36.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.607 "dma_device_type": 2 00:20:36.607 } 
00:20:36.607 ], 00:20:36.607 "driver_specific": {} 00:20:36.607 } 00:20:36.607 ] 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.607 
14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.607 "name": "Existed_Raid", 00:20:36.607 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:36.607 "strip_size_kb": 64, 00:20:36.607 "state": "online", 00:20:36.607 "raid_level": "raid5f", 00:20:36.607 "superblock": true, 00:20:36.607 "num_base_bdevs": 4, 00:20:36.607 "num_base_bdevs_discovered": 4, 00:20:36.607 "num_base_bdevs_operational": 4, 00:20:36.607 "base_bdevs_list": [ 00:20:36.607 { 00:20:36.607 "name": "NewBaseBdev", 00:20:36.607 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:36.607 "is_configured": true, 00:20:36.607 "data_offset": 2048, 00:20:36.607 "data_size": 63488 00:20:36.607 }, 00:20:36.607 { 00:20:36.607 "name": "BaseBdev2", 00:20:36.607 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:36.607 "is_configured": true, 00:20:36.607 "data_offset": 2048, 00:20:36.607 "data_size": 63488 00:20:36.607 }, 00:20:36.607 { 00:20:36.607 "name": "BaseBdev3", 00:20:36.607 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:36.607 "is_configured": true, 00:20:36.607 "data_offset": 2048, 00:20:36.607 "data_size": 63488 00:20:36.607 }, 00:20:36.607 { 00:20:36.607 "name": "BaseBdev4", 00:20:36.607 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:36.607 "is_configured": true, 00:20:36.607 "data_offset": 2048, 00:20:36.607 "data_size": 63488 00:20:36.607 } 00:20:36.607 ] 00:20:36.607 }' 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.607 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:37.175 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.176 [2024-11-27 14:20:07.924326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:37.176 "name": "Existed_Raid", 00:20:37.176 "aliases": [ 00:20:37.176 "17059554-abc4-4e09-a671-cc2d7e5994c0" 00:20:37.176 ], 00:20:37.176 "product_name": "Raid Volume", 00:20:37.176 "block_size": 512, 00:20:37.176 "num_blocks": 190464, 00:20:37.176 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:37.176 "assigned_rate_limits": { 00:20:37.176 "rw_ios_per_sec": 0, 00:20:37.176 "rw_mbytes_per_sec": 0, 00:20:37.176 "r_mbytes_per_sec": 0, 00:20:37.176 "w_mbytes_per_sec": 0 00:20:37.176 }, 00:20:37.176 "claimed": false, 00:20:37.176 "zoned": false, 00:20:37.176 "supported_io_types": { 00:20:37.176 "read": true, 00:20:37.176 "write": true, 00:20:37.176 "unmap": false, 00:20:37.176 "flush": false, 
00:20:37.176 "reset": true, 00:20:37.176 "nvme_admin": false, 00:20:37.176 "nvme_io": false, 00:20:37.176 "nvme_io_md": false, 00:20:37.176 "write_zeroes": true, 00:20:37.176 "zcopy": false, 00:20:37.176 "get_zone_info": false, 00:20:37.176 "zone_management": false, 00:20:37.176 "zone_append": false, 00:20:37.176 "compare": false, 00:20:37.176 "compare_and_write": false, 00:20:37.176 "abort": false, 00:20:37.176 "seek_hole": false, 00:20:37.176 "seek_data": false, 00:20:37.176 "copy": false, 00:20:37.176 "nvme_iov_md": false 00:20:37.176 }, 00:20:37.176 "driver_specific": { 00:20:37.176 "raid": { 00:20:37.176 "uuid": "17059554-abc4-4e09-a671-cc2d7e5994c0", 00:20:37.176 "strip_size_kb": 64, 00:20:37.176 "state": "online", 00:20:37.176 "raid_level": "raid5f", 00:20:37.176 "superblock": true, 00:20:37.176 "num_base_bdevs": 4, 00:20:37.176 "num_base_bdevs_discovered": 4, 00:20:37.176 "num_base_bdevs_operational": 4, 00:20:37.176 "base_bdevs_list": [ 00:20:37.176 { 00:20:37.176 "name": "NewBaseBdev", 00:20:37.176 "uuid": "78fc8dbf-4f13-4a97-8340-294d281aff82", 00:20:37.176 "is_configured": true, 00:20:37.176 "data_offset": 2048, 00:20:37.176 "data_size": 63488 00:20:37.176 }, 00:20:37.176 { 00:20:37.176 "name": "BaseBdev2", 00:20:37.176 "uuid": "e890c9b1-4270-4a03-8db7-3c71ee665f8f", 00:20:37.176 "is_configured": true, 00:20:37.176 "data_offset": 2048, 00:20:37.176 "data_size": 63488 00:20:37.176 }, 00:20:37.176 { 00:20:37.176 "name": "BaseBdev3", 00:20:37.176 "uuid": "19a8b04a-24d3-4395-af5a-1940faba557b", 00:20:37.176 "is_configured": true, 00:20:37.176 "data_offset": 2048, 00:20:37.176 "data_size": 63488 00:20:37.176 }, 00:20:37.176 { 00:20:37.176 "name": "BaseBdev4", 00:20:37.176 "uuid": "15aad56d-e0c8-4ea0-af51-ecb1a456e003", 00:20:37.176 "is_configured": true, 00:20:37.176 "data_offset": 2048, 00:20:37.176 "data_size": 63488 00:20:37.176 } 00:20:37.176 ] 00:20:37.176 } 00:20:37.176 } 00:20:37.176 }' 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:37.176 BaseBdev2 00:20:37.176 BaseBdev3 00:20:37.176 BaseBdev4' 00:20:37.176 14:20:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.176 
14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.176 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.436 [2024-11-27 14:20:08.239531] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:37.436 [2024-11-27 14:20:08.239631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.436 [2024-11-27 14:20:08.239770] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.436 [2024-11-27 14:20:08.240185] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.436 [2024-11-27 14:20:08.240208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83686 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83686 ']' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83686 
00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83686 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83686' 00:20:37.436 killing process with pid 83686 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83686 00:20:37.436 [2024-11-27 14:20:08.282801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.436 14:20:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83686 00:20:38.004 [2024-11-27 14:20:08.720225] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.381 14:20:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:39.381 00:20:39.381 real 0m12.194s 00:20:39.381 user 0m19.274s 00:20:39.381 sys 0m2.215s 00:20:39.381 ************************************ 00:20:39.381 END TEST raid5f_state_function_test_sb 00:20:39.381 ************************************ 00:20:39.381 14:20:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.381 14:20:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.381 14:20:10 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:20:39.381 14:20:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:20:39.381 14:20:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.381 14:20:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.381 ************************************ 00:20:39.381 START TEST raid5f_superblock_test 00:20:39.381 ************************************ 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84357 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84357 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84357 ']' 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.381 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.381 [2024-11-27 14:20:10.113160] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:20:39.381 [2024-11-27 14:20:10.113285] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84357 ] 00:20:39.381 [2024-11-27 14:20:10.289221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.640 [2024-11-27 14:20:10.413234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.899 [2024-11-27 14:20:10.626001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.899 [2024-11-27 14:20:10.626035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.166 14:20:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.166 malloc1 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.166 [2024-11-27 14:20:11.023811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:40.166 [2024-11-27 14:20:11.023946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.166 [2024-11-27 14:20:11.023987] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:40.166 [2024-11-27 14:20:11.024018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.166 [2024-11-27 14:20:11.026182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.166 [2024-11-27 14:20:11.026253] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:40.166 pt1 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.166 malloc2 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.166 [2024-11-27 14:20:11.085704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.166 [2024-11-27 14:20:11.085805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.166 [2024-11-27 14:20:11.085851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:40.166 [2024-11-27 14:20:11.085860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.166 [2024-11-27 14:20:11.088022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.166 [2024-11-27 14:20:11.088063] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.166 pt2 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.166 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.435 malloc3 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.435 [2024-11-27 14:20:11.153936] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:40.435 [2024-11-27 14:20:11.154048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.435 [2024-11-27 14:20:11.154109] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:40.435 [2024-11-27 14:20:11.154163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.435 [2024-11-27 14:20:11.156576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.435 [2024-11-27 14:20:11.156655] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:40.435 pt3 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:40.435 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.436 14:20:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.436 malloc4 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.436 [2024-11-27 14:20:11.214920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:40.436 [2024-11-27 14:20:11.215045] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.436 [2024-11-27 14:20:11.215088] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:40.436 [2024-11-27 14:20:11.215126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.436 [2024-11-27 14:20:11.217382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.436 [2024-11-27 14:20:11.217455] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:40.436 pt4 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.436 [2024-11-27 14:20:11.226930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:40.436 [2024-11-27 14:20:11.228885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.436 [2024-11-27 14:20:11.229063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:40.436 [2024-11-27 14:20:11.229154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:40.436 [2024-11-27 14:20:11.229402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:40.436 [2024-11-27 14:20:11.229423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:40.436 [2024-11-27 14:20:11.229751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:40.436 [2024-11-27 14:20:11.238084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:40.436 [2024-11-27 14:20:11.238166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:40.436 [2024-11-27 14:20:11.238425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.436 
14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.436 "name": "raid_bdev1", 00:20:40.436 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:40.436 "strip_size_kb": 64, 00:20:40.436 "state": "online", 00:20:40.436 "raid_level": "raid5f", 00:20:40.436 "superblock": true, 00:20:40.436 "num_base_bdevs": 4, 00:20:40.436 "num_base_bdevs_discovered": 4, 00:20:40.436 "num_base_bdevs_operational": 4, 00:20:40.436 "base_bdevs_list": [ 00:20:40.436 { 00:20:40.436 "name": "pt1", 00:20:40.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:40.436 "is_configured": true, 00:20:40.436 "data_offset": 2048, 00:20:40.436 "data_size": 63488 00:20:40.436 }, 00:20:40.436 { 00:20:40.436 "name": "pt2", 00:20:40.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.436 "is_configured": true, 00:20:40.436 "data_offset": 2048, 00:20:40.436 
"data_size": 63488 00:20:40.436 }, 00:20:40.436 { 00:20:40.436 "name": "pt3", 00:20:40.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.436 "is_configured": true, 00:20:40.436 "data_offset": 2048, 00:20:40.436 "data_size": 63488 00:20:40.436 }, 00:20:40.436 { 00:20:40.436 "name": "pt4", 00:20:40.436 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.436 "is_configured": true, 00:20:40.436 "data_offset": 2048, 00:20:40.436 "data_size": 63488 00:20:40.436 } 00:20:40.436 ] 00:20:40.436 }' 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.436 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.005 [2024-11-27 14:20:11.692024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.005 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:41.005 "name": "raid_bdev1", 00:20:41.005 "aliases": [ 00:20:41.005 "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d" 00:20:41.005 ], 00:20:41.005 "product_name": "Raid Volume", 00:20:41.005 "block_size": 512, 00:20:41.005 "num_blocks": 190464, 00:20:41.005 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:41.005 "assigned_rate_limits": { 00:20:41.005 "rw_ios_per_sec": 0, 00:20:41.005 "rw_mbytes_per_sec": 0, 00:20:41.005 "r_mbytes_per_sec": 0, 00:20:41.005 "w_mbytes_per_sec": 0 00:20:41.005 }, 00:20:41.005 "claimed": false, 00:20:41.005 "zoned": false, 00:20:41.005 "supported_io_types": { 00:20:41.005 "read": true, 00:20:41.005 "write": true, 00:20:41.005 "unmap": false, 00:20:41.005 "flush": false, 00:20:41.005 "reset": true, 00:20:41.005 "nvme_admin": false, 00:20:41.005 "nvme_io": false, 00:20:41.005 "nvme_io_md": false, 00:20:41.005 "write_zeroes": true, 00:20:41.005 "zcopy": false, 00:20:41.005 "get_zone_info": false, 00:20:41.005 "zone_management": false, 00:20:41.005 "zone_append": false, 00:20:41.005 "compare": false, 00:20:41.005 "compare_and_write": false, 00:20:41.005 "abort": false, 00:20:41.005 "seek_hole": false, 00:20:41.005 "seek_data": false, 00:20:41.005 "copy": false, 00:20:41.005 "nvme_iov_md": false 00:20:41.005 }, 00:20:41.005 "driver_specific": { 00:20:41.005 "raid": { 00:20:41.005 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:41.005 "strip_size_kb": 64, 00:20:41.005 "state": "online", 00:20:41.005 "raid_level": "raid5f", 00:20:41.006 "superblock": true, 00:20:41.006 "num_base_bdevs": 4, 00:20:41.006 "num_base_bdevs_discovered": 4, 00:20:41.006 "num_base_bdevs_operational": 4, 00:20:41.006 "base_bdevs_list": [ 00:20:41.006 { 00:20:41.006 "name": "pt1", 00:20:41.006 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.006 "is_configured": true, 00:20:41.006 "data_offset": 2048, 
00:20:41.006 "data_size": 63488 00:20:41.006 }, 00:20:41.006 { 00:20:41.006 "name": "pt2", 00:20:41.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.006 "is_configured": true, 00:20:41.006 "data_offset": 2048, 00:20:41.006 "data_size": 63488 00:20:41.006 }, 00:20:41.006 { 00:20:41.006 "name": "pt3", 00:20:41.006 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.006 "is_configured": true, 00:20:41.006 "data_offset": 2048, 00:20:41.006 "data_size": 63488 00:20:41.006 }, 00:20:41.006 { 00:20:41.006 "name": "pt4", 00:20:41.006 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.006 "is_configured": true, 00:20:41.006 "data_offset": 2048, 00:20:41.006 "data_size": 63488 00:20:41.006 } 00:20:41.006 ] 00:20:41.006 } 00:20:41.006 } 00:20:41.006 }' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:41.006 pt2 00:20:41.006 pt3 00:20:41.006 pt4' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.006 14:20:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.006 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-11-27 14:20:11.995444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d ']' 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 [2024-11-27 14:20:12.039197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.267 [2024-11-27 14:20:12.039223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.267 [2024-11-27 14:20:12.039300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.267 [2024-11-27 14:20:12.039384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.267 [2024-11-27 14:20:12.039400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.267 
14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.267 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 [2024-11-27 14:20:12.186965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:41.268 [2024-11-27 14:20:12.189008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:41.268 [2024-11-27 14:20:12.189069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:41.268 [2024-11-27 14:20:12.189102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:41.268 [2024-11-27 14:20:12.189172] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:41.268 [2024-11-27 14:20:12.189215] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:41.268 [2024-11-27 14:20:12.189236] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:41.268 [2024-11-27 14:20:12.189254] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:41.268 [2024-11-27 14:20:12.189267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.268 [2024-11-27 14:20:12.189278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:41.268 request: 00:20:41.268 { 00:20:41.268 "name": "raid_bdev1", 00:20:41.268 "raid_level": "raid5f", 00:20:41.268 "base_bdevs": [ 00:20:41.268 "malloc1", 00:20:41.268 "malloc2", 00:20:41.268 "malloc3", 00:20:41.268 "malloc4" 00:20:41.268 ], 00:20:41.268 "strip_size_kb": 64, 00:20:41.268 "superblock": false, 00:20:41.268 "method": "bdev_raid_create", 00:20:41.268 "req_id": 1 00:20:41.268 } 00:20:41.268 Got JSON-RPC error response 
00:20:41.268 response: 00:20:41.268 { 00:20:41.268 "code": -17, 00:20:41.268 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:41.268 } 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:41.268 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.528 [2024-11-27 14:20:12.254805] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:41.528 [2024-11-27 14:20:12.254914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:20:41.528 [2024-11-27 14:20:12.254947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:41.528 [2024-11-27 14:20:12.254977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.528 [2024-11-27 14:20:12.257282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.528 [2024-11-27 14:20:12.257355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:41.528 [2024-11-27 14:20:12.257461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:41.528 [2024-11-27 14:20:12.257552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:41.528 pt1 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.528 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.528 "name": "raid_bdev1", 00:20:41.528 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:41.528 "strip_size_kb": 64, 00:20:41.528 "state": "configuring", 00:20:41.528 "raid_level": "raid5f", 00:20:41.528 "superblock": true, 00:20:41.528 "num_base_bdevs": 4, 00:20:41.528 "num_base_bdevs_discovered": 1, 00:20:41.528 "num_base_bdevs_operational": 4, 00:20:41.528 "base_bdevs_list": [ 00:20:41.528 { 00:20:41.528 "name": "pt1", 00:20:41.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:41.529 "is_configured": true, 00:20:41.529 "data_offset": 2048, 00:20:41.529 "data_size": 63488 00:20:41.529 }, 00:20:41.529 { 00:20:41.529 "name": null, 00:20:41.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.529 "is_configured": false, 00:20:41.529 "data_offset": 2048, 00:20:41.529 "data_size": 63488 00:20:41.529 }, 00:20:41.529 { 00:20:41.529 "name": null, 00:20:41.529 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.529 "is_configured": false, 00:20:41.529 "data_offset": 2048, 00:20:41.529 "data_size": 63488 00:20:41.529 }, 00:20:41.529 { 00:20:41.529 "name": null, 00:20:41.529 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.529 "is_configured": false, 00:20:41.529 "data_offset": 2048, 00:20:41.529 "data_size": 63488 00:20:41.529 } 00:20:41.529 ] 00:20:41.529 }' 
00:20:41.529 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.529 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.097 [2024-11-27 14:20:12.750025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.097 [2024-11-27 14:20:12.750182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.097 [2024-11-27 14:20:12.750226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:42.097 [2024-11-27 14:20:12.750260] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.097 [2024-11-27 14:20:12.750810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.097 [2024-11-27 14:20:12.750886] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.097 [2024-11-27 14:20:12.751028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:42.097 [2024-11-27 14:20:12.751093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.097 pt2 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.097 [2024-11-27 14:20:12.758019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:42.097 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.097 "name": "raid_bdev1", 00:20:42.097 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:42.097 "strip_size_kb": 64, 00:20:42.097 "state": "configuring", 00:20:42.097 "raid_level": "raid5f", 00:20:42.097 "superblock": true, 00:20:42.097 "num_base_bdevs": 4, 00:20:42.097 "num_base_bdevs_discovered": 1, 00:20:42.097 "num_base_bdevs_operational": 4, 00:20:42.097 "base_bdevs_list": [ 00:20:42.097 { 00:20:42.097 "name": "pt1", 00:20:42.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.097 "is_configured": true, 00:20:42.097 "data_offset": 2048, 00:20:42.097 "data_size": 63488 00:20:42.097 }, 00:20:42.097 { 00:20:42.097 "name": null, 00:20:42.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.097 "is_configured": false, 00:20:42.097 "data_offset": 0, 00:20:42.097 "data_size": 63488 00:20:42.097 }, 00:20:42.097 { 00:20:42.097 "name": null, 00:20:42.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.097 "is_configured": false, 00:20:42.098 "data_offset": 2048, 00:20:42.098 "data_size": 63488 00:20:42.098 }, 00:20:42.098 { 00:20:42.098 "name": null, 00:20:42.098 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.098 "is_configured": false, 00:20:42.098 "data_offset": 2048, 00:20:42.098 "data_size": 63488 00:20:42.098 } 00:20:42.098 ] 00:20:42.098 }' 00:20:42.098 14:20:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.098 14:20:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.357 [2024-11-27 14:20:13.213281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:42.357 [2024-11-27 14:20:13.213414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.357 [2024-11-27 14:20:13.213453] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:42.357 [2024-11-27 14:20:13.213503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.357 [2024-11-27 14:20:13.214016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.357 [2024-11-27 14:20:13.214085] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:42.357 [2024-11-27 14:20:13.214221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:42.357 [2024-11-27 14:20:13.214278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:42.357 pt2 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.357 [2024-11-27 14:20:13.225214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:20:42.357 [2024-11-27 14:20:13.225297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.357 [2024-11-27 14:20:13.225356] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:42.357 [2024-11-27 14:20:13.225395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.357 [2024-11-27 14:20:13.225835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.357 [2024-11-27 14:20:13.225901] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:42.357 [2024-11-27 14:20:13.225985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:42.357 [2024-11-27 14:20:13.226016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:42.357 pt3 00:20:42.357 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.358 [2024-11-27 14:20:13.237170] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:42.358 [2024-11-27 14:20:13.237259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.358 [2024-11-27 14:20:13.237291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:42.358 [2024-11-27 14:20:13.237317] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.358 [2024-11-27 14:20:13.237700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.358 [2024-11-27 14:20:13.237759] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:42.358 [2024-11-27 14:20:13.237852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:42.358 [2024-11-27 14:20:13.237905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:42.358 [2024-11-27 14:20:13.238069] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:42.358 [2024-11-27 14:20:13.238107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:42.358 [2024-11-27 14:20:13.238384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:42.358 [2024-11-27 14:20:13.245984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:42.358 [2024-11-27 14:20:13.246041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:42.358 [2024-11-27 14:20:13.246263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.358 pt4 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.358 "name": "raid_bdev1", 00:20:42.358 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:42.358 "strip_size_kb": 64, 00:20:42.358 "state": "online", 00:20:42.358 "raid_level": "raid5f", 00:20:42.358 "superblock": true, 00:20:42.358 "num_base_bdevs": 4, 00:20:42.358 "num_base_bdevs_discovered": 4, 00:20:42.358 "num_base_bdevs_operational": 4, 00:20:42.358 "base_bdevs_list": [ 00:20:42.358 { 00:20:42.358 "name": "pt1", 00:20:42.358 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.358 "is_configured": true, 00:20:42.358 
"data_offset": 2048, 00:20:42.358 "data_size": 63488 00:20:42.358 }, 00:20:42.358 { 00:20:42.358 "name": "pt2", 00:20:42.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.358 "is_configured": true, 00:20:42.358 "data_offset": 2048, 00:20:42.358 "data_size": 63488 00:20:42.358 }, 00:20:42.358 { 00:20:42.358 "name": "pt3", 00:20:42.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.358 "is_configured": true, 00:20:42.358 "data_offset": 2048, 00:20:42.358 "data_size": 63488 00:20:42.358 }, 00:20:42.358 { 00:20:42.358 "name": "pt4", 00:20:42.358 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.358 "is_configured": true, 00:20:42.358 "data_offset": 2048, 00:20:42.358 "data_size": 63488 00:20:42.358 } 00:20:42.358 ] 00:20:42.358 }' 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.358 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.925 14:20:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:42.925 [2024-11-27 14:20:13.686796] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.925 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:42.925 "name": "raid_bdev1", 00:20:42.925 "aliases": [ 00:20:42.925 "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d" 00:20:42.925 ], 00:20:42.925 "product_name": "Raid Volume", 00:20:42.925 "block_size": 512, 00:20:42.925 "num_blocks": 190464, 00:20:42.925 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:42.925 "assigned_rate_limits": { 00:20:42.925 "rw_ios_per_sec": 0, 00:20:42.925 "rw_mbytes_per_sec": 0, 00:20:42.925 "r_mbytes_per_sec": 0, 00:20:42.925 "w_mbytes_per_sec": 0 00:20:42.925 }, 00:20:42.925 "claimed": false, 00:20:42.925 "zoned": false, 00:20:42.925 "supported_io_types": { 00:20:42.925 "read": true, 00:20:42.925 "write": true, 00:20:42.925 "unmap": false, 00:20:42.925 "flush": false, 00:20:42.925 "reset": true, 00:20:42.925 "nvme_admin": false, 00:20:42.925 "nvme_io": false, 00:20:42.925 "nvme_io_md": false, 00:20:42.925 "write_zeroes": true, 00:20:42.925 "zcopy": false, 00:20:42.925 "get_zone_info": false, 00:20:42.925 "zone_management": false, 00:20:42.926 "zone_append": false, 00:20:42.926 "compare": false, 00:20:42.926 "compare_and_write": false, 00:20:42.926 "abort": false, 00:20:42.926 "seek_hole": false, 00:20:42.926 "seek_data": false, 00:20:42.926 "copy": false, 00:20:42.926 "nvme_iov_md": false 00:20:42.926 }, 00:20:42.926 "driver_specific": { 00:20:42.926 "raid": { 00:20:42.926 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:42.926 "strip_size_kb": 64, 00:20:42.926 "state": "online", 00:20:42.926 "raid_level": "raid5f", 00:20:42.926 "superblock": true, 00:20:42.926 "num_base_bdevs": 4, 00:20:42.926 "num_base_bdevs_discovered": 4, 
00:20:42.926 "num_base_bdevs_operational": 4, 00:20:42.926 "base_bdevs_list": [ 00:20:42.926 { 00:20:42.926 "name": "pt1", 00:20:42.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:42.926 "is_configured": true, 00:20:42.926 "data_offset": 2048, 00:20:42.926 "data_size": 63488 00:20:42.926 }, 00:20:42.926 { 00:20:42.926 "name": "pt2", 00:20:42.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:42.926 "is_configured": true, 00:20:42.926 "data_offset": 2048, 00:20:42.926 "data_size": 63488 00:20:42.926 }, 00:20:42.926 { 00:20:42.926 "name": "pt3", 00:20:42.926 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:42.926 "is_configured": true, 00:20:42.926 "data_offset": 2048, 00:20:42.926 "data_size": 63488 00:20:42.926 }, 00:20:42.926 { 00:20:42.926 "name": "pt4", 00:20:42.926 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:42.926 "is_configured": true, 00:20:42.926 "data_offset": 2048, 00:20:42.926 "data_size": 63488 00:20:42.926 } 00:20:42.926 ] 00:20:42.926 } 00:20:42.926 } 00:20:42.926 }' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:42.926 pt2 00:20:42.926 pt3 00:20:42.926 pt4' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.926 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 14:20:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 14:20:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.185 [2024-11-27 14:20:14.014162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:43.185 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.186 
14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d '!=' 1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d ']' 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.186 [2024-11-27 14:20:14.061935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.186 "name": "raid_bdev1", 00:20:43.186 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:43.186 "strip_size_kb": 64, 00:20:43.186 "state": "online", 00:20:43.186 "raid_level": "raid5f", 00:20:43.186 "superblock": true, 00:20:43.186 "num_base_bdevs": 4, 00:20:43.186 "num_base_bdevs_discovered": 3, 00:20:43.186 "num_base_bdevs_operational": 3, 00:20:43.186 "base_bdevs_list": [ 00:20:43.186 { 00:20:43.186 "name": null, 00:20:43.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.186 "is_configured": false, 00:20:43.186 "data_offset": 0, 00:20:43.186 "data_size": 63488 00:20:43.186 }, 00:20:43.186 { 00:20:43.186 "name": "pt2", 00:20:43.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.186 "is_configured": true, 00:20:43.186 "data_offset": 2048, 00:20:43.186 "data_size": 63488 00:20:43.186 }, 00:20:43.186 { 00:20:43.186 "name": "pt3", 00:20:43.186 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.186 "is_configured": true, 00:20:43.186 "data_offset": 2048, 00:20:43.186 "data_size": 63488 00:20:43.186 }, 00:20:43.186 { 00:20:43.186 "name": "pt4", 00:20:43.186 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.186 "is_configured": true, 00:20:43.186 
"data_offset": 2048, 00:20:43.186 "data_size": 63488 00:20:43.186 } 00:20:43.186 ] 00:20:43.186 }' 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.186 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.756 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.756 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.756 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.756 [2024-11-27 14:20:14.549106] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.756 [2024-11-27 14:20:14.549207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.757 [2024-11-27 14:20:14.549344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.757 [2024-11-27 14:20:14.549469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.757 [2024-11-27 14:20:14.549516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 [2024-11-27 14:20:14.636954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:43.757 [2024-11-27 14:20:14.637015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.757 [2024-11-27 14:20:14.637034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:43.757 [2024-11-27 14:20:14.637044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.757 [2024-11-27 14:20:14.639411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.757 [2024-11-27 14:20:14.639489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:43.757 [2024-11-27 14:20:14.639582] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:43.757 [2024-11-27 14:20:14.639650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.757 pt2 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.757 "name": "raid_bdev1", 00:20:43.757 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:43.757 "strip_size_kb": 64, 00:20:43.757 "state": "configuring", 00:20:43.757 "raid_level": "raid5f", 00:20:43.757 "superblock": true, 00:20:43.757 
"num_base_bdevs": 4, 00:20:43.757 "num_base_bdevs_discovered": 1, 00:20:43.757 "num_base_bdevs_operational": 3, 00:20:43.757 "base_bdevs_list": [ 00:20:43.757 { 00:20:43.757 "name": null, 00:20:43.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.757 "is_configured": false, 00:20:43.757 "data_offset": 2048, 00:20:43.757 "data_size": 63488 00:20:43.757 }, 00:20:43.757 { 00:20:43.757 "name": "pt2", 00:20:43.757 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:43.757 "is_configured": true, 00:20:43.757 "data_offset": 2048, 00:20:43.757 "data_size": 63488 00:20:43.757 }, 00:20:43.757 { 00:20:43.757 "name": null, 00:20:43.757 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:43.757 "is_configured": false, 00:20:43.757 "data_offset": 2048, 00:20:43.757 "data_size": 63488 00:20:43.757 }, 00:20:43.757 { 00:20:43.757 "name": null, 00:20:43.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:43.757 "is_configured": false, 00:20:43.757 "data_offset": 2048, 00:20:43.757 "data_size": 63488 00:20:43.757 } 00:20:43.757 ] 00:20:43.757 }' 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.757 14:20:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.327 [2024-11-27 14:20:15.032302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:44.327 [2024-11-27 
14:20:15.032427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.327 [2024-11-27 14:20:15.032472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:44.327 [2024-11-27 14:20:15.032503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.327 [2024-11-27 14:20:15.032985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.327 [2024-11-27 14:20:15.033049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:44.327 [2024-11-27 14:20:15.033188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:44.327 [2024-11-27 14:20:15.033241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:44.327 pt3 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.327 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.327 "name": "raid_bdev1", 00:20:44.327 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:44.327 "strip_size_kb": 64, 00:20:44.327 "state": "configuring", 00:20:44.327 "raid_level": "raid5f", 00:20:44.327 "superblock": true, 00:20:44.327 "num_base_bdevs": 4, 00:20:44.327 "num_base_bdevs_discovered": 2, 00:20:44.327 "num_base_bdevs_operational": 3, 00:20:44.327 "base_bdevs_list": [ 00:20:44.327 { 00:20:44.327 "name": null, 00:20:44.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.327 "is_configured": false, 00:20:44.327 "data_offset": 2048, 00:20:44.327 "data_size": 63488 00:20:44.327 }, 00:20:44.327 { 00:20:44.327 "name": "pt2", 00:20:44.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.327 "is_configured": true, 00:20:44.327 "data_offset": 2048, 00:20:44.327 "data_size": 63488 00:20:44.327 }, 00:20:44.327 { 00:20:44.327 "name": "pt3", 00:20:44.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:44.327 "is_configured": true, 00:20:44.327 "data_offset": 2048, 00:20:44.327 "data_size": 63488 00:20:44.327 }, 00:20:44.328 { 00:20:44.328 "name": null, 00:20:44.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:44.328 "is_configured": false, 00:20:44.328 "data_offset": 2048, 
00:20:44.328 "data_size": 63488 00:20:44.328 } 00:20:44.328 ] 00:20:44.328 }' 00:20:44.328 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.328 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.899 [2024-11-27 14:20:15.551451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:44.899 [2024-11-27 14:20:15.551521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.899 [2024-11-27 14:20:15.551544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:44.899 [2024-11-27 14:20:15.551553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.899 [2024-11-27 14:20:15.552037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.899 [2024-11-27 14:20:15.552057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:44.899 [2024-11-27 14:20:15.552164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:44.899 [2024-11-27 14:20:15.552198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:44.899 [2024-11-27 14:20:15.552347] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:44.899 [2024-11-27 14:20:15.552356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:44.899 [2024-11-27 14:20:15.552630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:44.899 [2024-11-27 14:20:15.560808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:44.899 [2024-11-27 14:20:15.560877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:44.899 [2024-11-27 14:20:15.561273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.899 pt4 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.899 
14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.899 "name": "raid_bdev1", 00:20:44.899 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:44.899 "strip_size_kb": 64, 00:20:44.899 "state": "online", 00:20:44.899 "raid_level": "raid5f", 00:20:44.899 "superblock": true, 00:20:44.899 "num_base_bdevs": 4, 00:20:44.899 "num_base_bdevs_discovered": 3, 00:20:44.899 "num_base_bdevs_operational": 3, 00:20:44.899 "base_bdevs_list": [ 00:20:44.899 { 00:20:44.899 "name": null, 00:20:44.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.899 "is_configured": false, 00:20:44.899 "data_offset": 2048, 00:20:44.899 "data_size": 63488 00:20:44.899 }, 00:20:44.899 { 00:20:44.899 "name": "pt2", 00:20:44.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:44.899 "is_configured": true, 00:20:44.899 "data_offset": 2048, 00:20:44.899 "data_size": 63488 00:20:44.899 }, 00:20:44.899 { 00:20:44.899 "name": "pt3", 00:20:44.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:44.899 "is_configured": true, 00:20:44.899 "data_offset": 2048, 00:20:44.899 "data_size": 63488 00:20:44.899 }, 00:20:44.899 { 00:20:44.899 "name": "pt4", 00:20:44.899 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:44.899 "is_configured": true, 00:20:44.899 "data_offset": 2048, 00:20:44.899 "data_size": 63488 00:20:44.899 } 00:20:44.899 ] 00:20:44.899 }' 00:20:44.899 14:20:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.899 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.159 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:45.159 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.159 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.160 [2024-11-27 14:20:15.986788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.160 [2024-11-27 14:20:15.986822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:45.160 [2024-11-27 14:20:15.986915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:45.160 [2024-11-27 14:20:15.987002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:45.160 [2024-11-27 14:20:15.987016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:45.160 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.160 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:45.160 14:20:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.160 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.160 14:20:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.160 [2024-11-27 14:20:16.062654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:45.160 [2024-11-27 14:20:16.062788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.160 [2024-11-27 14:20:16.062855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:45.160 [2024-11-27 14:20:16.062876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.160 [2024-11-27 14:20:16.065607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.160 [2024-11-27 14:20:16.065657] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:45.160 [2024-11-27 14:20:16.065763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:45.160 [2024-11-27 14:20:16.065821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:45.160 
[2024-11-27 14:20:16.065982] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:45.160 [2024-11-27 14:20:16.065998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:45.160 [2024-11-27 14:20:16.066017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:45.160 [2024-11-27 14:20:16.066097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:45.160 [2024-11-27 14:20:16.066250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:45.160 pt1 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.160 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.420 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.421 "name": "raid_bdev1", 00:20:45.421 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:45.421 "strip_size_kb": 64, 00:20:45.421 "state": "configuring", 00:20:45.421 "raid_level": "raid5f", 00:20:45.421 "superblock": true, 00:20:45.421 "num_base_bdevs": 4, 00:20:45.421 "num_base_bdevs_discovered": 2, 00:20:45.421 "num_base_bdevs_operational": 3, 00:20:45.421 "base_bdevs_list": [ 00:20:45.421 { 00:20:45.421 "name": null, 00:20:45.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.421 "is_configured": false, 00:20:45.421 "data_offset": 2048, 00:20:45.421 "data_size": 63488 00:20:45.421 }, 00:20:45.421 { 00:20:45.421 "name": "pt2", 00:20:45.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.421 "is_configured": true, 00:20:45.421 "data_offset": 2048, 00:20:45.421 "data_size": 63488 00:20:45.421 }, 00:20:45.421 { 00:20:45.421 "name": "pt3", 00:20:45.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:45.421 "is_configured": true, 00:20:45.421 "data_offset": 2048, 00:20:45.421 "data_size": 63488 00:20:45.421 }, 00:20:45.421 { 00:20:45.421 "name": null, 00:20:45.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:45.421 "is_configured": false, 00:20:45.421 "data_offset": 2048, 00:20:45.421 "data_size": 63488 00:20:45.421 } 00:20:45.421 ] 
00:20:45.421 }' 00:20:45.421 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.421 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.680 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.680 [2024-11-27 14:20:16.533917] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:45.680 [2024-11-27 14:20:16.534066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.680 [2024-11-27 14:20:16.534146] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:45.681 [2024-11-27 14:20:16.534192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.681 [2024-11-27 14:20:16.534787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.681 [2024-11-27 14:20:16.534878] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:45.681 [2024-11-27 14:20:16.535000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:45.681 [2024-11-27 14:20:16.535033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:45.681 [2024-11-27 14:20:16.535237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:45.681 [2024-11-27 14:20:16.535251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:45.681 [2024-11-27 14:20:16.535576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:45.681 [2024-11-27 14:20:16.545214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:45.681 [2024-11-27 14:20:16.545287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:45.681 [2024-11-27 14:20:16.545621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.681 pt4 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.681 14:20:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.681 "name": "raid_bdev1", 00:20:45.681 "uuid": "1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d", 00:20:45.681 "strip_size_kb": 64, 00:20:45.681 "state": "online", 00:20:45.681 "raid_level": "raid5f", 00:20:45.681 "superblock": true, 00:20:45.681 "num_base_bdevs": 4, 00:20:45.681 "num_base_bdevs_discovered": 3, 00:20:45.681 "num_base_bdevs_operational": 3, 00:20:45.681 "base_bdevs_list": [ 00:20:45.681 { 00:20:45.681 "name": null, 00:20:45.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.681 "is_configured": false, 00:20:45.681 "data_offset": 2048, 00:20:45.681 "data_size": 63488 00:20:45.681 }, 00:20:45.681 { 00:20:45.681 "name": "pt2", 00:20:45.681 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:45.681 "is_configured": true, 00:20:45.681 "data_offset": 2048, 00:20:45.681 "data_size": 63488 00:20:45.681 }, 00:20:45.681 { 00:20:45.681 "name": "pt3", 00:20:45.681 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:45.681 "is_configured": true, 00:20:45.681 "data_offset": 2048, 00:20:45.681 "data_size": 63488 
00:20:45.681 }, 00:20:45.681 { 00:20:45.681 "name": "pt4", 00:20:45.681 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:45.681 "is_configured": true, 00:20:45.681 "data_offset": 2048, 00:20:45.681 "data_size": 63488 00:20:45.681 } 00:20:45.681 ] 00:20:45.681 }' 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.681 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.274 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:46.274 14:20:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:46.274 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.274 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.274 14:20:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.274 14:20:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:46.274 14:20:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:46.274 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.274 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.274 14:20:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:46.274 [2024-11-27 14:20:17.019586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d '!=' 1fa4b4a6-3138-4c50-a1f1-f5a444a8a79d ']' 00:20:46.275 14:20:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84357 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84357 ']' 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84357 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84357 00:20:46.275 killing process with pid 84357 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84357' 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84357 00:20:46.275 [2024-11-27 14:20:17.107316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.275 [2024-11-27 14:20:17.107415] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.275 14:20:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84357 00:20:46.275 [2024-11-27 14:20:17.107499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.275 [2024-11-27 14:20:17.107516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:46.843 [2024-11-27 14:20:17.531238] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.782 14:20:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:47.782 
************************************ 00:20:47.782 END TEST raid5f_superblock_test 00:20:47.782 ************************************ 00:20:47.782 00:20:47.782 real 0m8.669s 00:20:47.782 user 0m13.658s 00:20:47.782 sys 0m1.543s 00:20:47.782 14:20:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.782 14:20:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.042 14:20:18 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:48.042 14:20:18 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:48.042 14:20:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:48.042 14:20:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.042 14:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:48.042 ************************************ 00:20:48.042 START TEST raid5f_rebuild_test 00:20:48.042 ************************************ 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:48.042 14:20:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84842 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84842 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84842 ']' 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:48.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.042 14:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.042 [2024-11-27 14:20:18.869814] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:20:48.042 [2024-11-27 14:20:18.870015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:20:48.042 Zero copy mechanism will not be used. 
00:20:48.042 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84842 ] 00:20:48.311 [2024-11-27 14:20:19.042733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.311 [2024-11-27 14:20:19.168626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.572 [2024-11-27 14:20:19.380948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.572 [2024-11-27 14:20:19.381058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.832 BaseBdev1_malloc 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.832 [2024-11-27 14:20:19.773219] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.832 [2024-11-27 14:20:19.773331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:20:48.832 [2024-11-27 14:20:19.773373] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:48.832 [2024-11-27 14:20:19.773406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.832 [2024-11-27 14:20:19.775534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.832 [2024-11-27 14:20:19.775629] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.832 BaseBdev1 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.832 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 BaseBdev2_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-11-27 14:20:19.830886] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:49.092 [2024-11-27 14:20:19.830954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.092 [2024-11-27 14:20:19.830977] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:49.092 [2024-11-27 14:20:19.830988] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.092 [2024-11-27 14:20:19.833407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.092 [2024-11-27 14:20:19.833453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:49.092 BaseBdev2 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 BaseBdev3_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-11-27 14:20:19.900351] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:49.092 [2024-11-27 14:20:19.900409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.092 [2024-11-27 14:20:19.900432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:49.092 [2024-11-27 14:20:19.900443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.092 [2024-11-27 14:20:19.902680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.092 [2024-11-27 
14:20:19.902730] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:49.092 BaseBdev3 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 BaseBdev4_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 [2024-11-27 14:20:19.957453] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:49.092 [2024-11-27 14:20:19.957521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.092 [2024-11-27 14:20:19.957543] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:49.092 [2024-11-27 14:20:19.957554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.092 [2024-11-27 14:20:19.959668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.092 [2024-11-27 14:20:19.959813] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:49.092 BaseBdev4 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.092 14:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.092 spare_malloc 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 spare_delay 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 [2024-11-27 14:20:20.028068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:49.093 [2024-11-27 14:20:20.028143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.093 [2024-11-27 14:20:20.028165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:49.093 [2024-11-27 14:20:20.028177] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.093 [2024-11-27 14:20:20.030480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.093 [2024-11-27 14:20:20.030523] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:49.093 spare 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.093 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.093 [2024-11-27 14:20:20.040105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.093 [2024-11-27 14:20:20.042150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.093 [2024-11-27 14:20:20.042218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.093 [2024-11-27 14:20:20.042273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.093 [2024-11-27 14:20:20.042367] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:49.093 [2024-11-27 14:20:20.042379] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:49.093 [2024-11-27 14:20:20.042677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:49.352 [2024-11-27 14:20:20.050938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:49.352 [2024-11-27 14:20:20.051013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:49.352 [2024-11-27 14:20:20.051312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.352 14:20:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.352 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.352 "name": "raid_bdev1", 00:20:49.352 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:49.352 "strip_size_kb": 64, 00:20:49.352 "state": "online", 00:20:49.352 "raid_level": "raid5f", 00:20:49.352 "superblock": false, 00:20:49.352 "num_base_bdevs": 4, 00:20:49.352 
"num_base_bdevs_discovered": 4, 00:20:49.352 "num_base_bdevs_operational": 4, 00:20:49.352 "base_bdevs_list": [ 00:20:49.352 { 00:20:49.352 "name": "BaseBdev1", 00:20:49.352 "uuid": "8228b13a-7dc3-53c9-a71d-edaee09615e1", 00:20:49.352 "is_configured": true, 00:20:49.352 "data_offset": 0, 00:20:49.353 "data_size": 65536 00:20:49.353 }, 00:20:49.353 { 00:20:49.353 "name": "BaseBdev2", 00:20:49.353 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:49.353 "is_configured": true, 00:20:49.353 "data_offset": 0, 00:20:49.353 "data_size": 65536 00:20:49.353 }, 00:20:49.353 { 00:20:49.353 "name": "BaseBdev3", 00:20:49.353 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:49.353 "is_configured": true, 00:20:49.353 "data_offset": 0, 00:20:49.353 "data_size": 65536 00:20:49.353 }, 00:20:49.353 { 00:20:49.353 "name": "BaseBdev4", 00:20:49.353 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:49.353 "is_configured": true, 00:20:49.353 "data_offset": 0, 00:20:49.353 "data_size": 65536 00:20:49.353 } 00:20:49.353 ] 00:20:49.353 }' 00:20:49.353 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.353 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.613 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.613 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.613 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.613 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:49.613 [2024-11-27 14:20:20.527763] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.613 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.872 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:49.873 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:49.873 [2024-11-27 14:20:20.815140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:50.132 /dev/nbd0 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.132 1+0 records in 00:20:50.132 1+0 records out 00:20:50.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532483 s, 7.7 MB/s 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:50.132 14:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:50.700 512+0 records in 00:20:50.700 512+0 records out 00:20:50.700 100663296 bytes (101 MB, 96 MiB) copied, 0.501758 s, 201 MB/s 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:50.700 [2024-11-27 14:20:21.626500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.700 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.700 [2024-11-27 14:20:21.649594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.960 "name": "raid_bdev1", 00:20:50.960 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:50.960 "strip_size_kb": 64, 00:20:50.960 "state": "online", 00:20:50.960 "raid_level": "raid5f", 00:20:50.960 "superblock": false, 00:20:50.960 "num_base_bdevs": 4, 00:20:50.960 "num_base_bdevs_discovered": 3, 00:20:50.960 "num_base_bdevs_operational": 3, 00:20:50.960 "base_bdevs_list": [ 00:20:50.960 { 00:20:50.960 "name": null, 00:20:50.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.960 "is_configured": false, 00:20:50.960 "data_offset": 0, 00:20:50.960 "data_size": 65536 00:20:50.960 }, 00:20:50.960 { 00:20:50.960 "name": "BaseBdev2", 00:20:50.960 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:50.960 "is_configured": true, 00:20:50.960 "data_offset": 0, 00:20:50.960 "data_size": 65536 00:20:50.960 }, 00:20:50.960 { 00:20:50.960 "name": "BaseBdev3", 00:20:50.960 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:50.960 "is_configured": true, 00:20:50.960 
"data_offset": 0, 00:20:50.960 "data_size": 65536 00:20:50.960 }, 00:20:50.960 { 00:20:50.960 "name": "BaseBdev4", 00:20:50.960 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:50.960 "is_configured": true, 00:20:50.960 "data_offset": 0, 00:20:50.960 "data_size": 65536 00:20:50.960 } 00:20:50.960 ] 00:20:50.960 }' 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.960 14:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.219 14:20:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:51.219 14:20:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.219 14:20:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.219 [2024-11-27 14:20:22.116803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:51.220 [2024-11-27 14:20:22.135164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:51.220 14:20:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.220 14:20:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:51.220 [2024-11-27 14:20:22.146542] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.601 
14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.601 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.601 "name": "raid_bdev1", 00:20:52.601 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:52.601 "strip_size_kb": 64, 00:20:52.601 "state": "online", 00:20:52.601 "raid_level": "raid5f", 00:20:52.601 "superblock": false, 00:20:52.601 "num_base_bdevs": 4, 00:20:52.601 "num_base_bdevs_discovered": 4, 00:20:52.601 "num_base_bdevs_operational": 4, 00:20:52.601 "process": { 00:20:52.601 "type": "rebuild", 00:20:52.601 "target": "spare", 00:20:52.601 "progress": { 00:20:52.601 "blocks": 17280, 00:20:52.601 "percent": 8 00:20:52.601 } 00:20:52.601 }, 00:20:52.601 "base_bdevs_list": [ 00:20:52.601 { 00:20:52.601 "name": "spare", 00:20:52.601 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:52.601 "is_configured": true, 00:20:52.601 "data_offset": 0, 00:20:52.601 "data_size": 65536 00:20:52.601 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev2", 00:20:52.602 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev3", 00:20:52.602 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev4", 00:20:52.602 "uuid": 
"e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 } 00:20:52.602 ] 00:20:52.602 }' 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.602 [2024-11-27 14:20:23.286179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.602 [2024-11-27 14:20:23.356287] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:52.602 [2024-11-27 14:20:23.356454] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.602 [2024-11-27 14:20:23.356477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.602 [2024-11-27 14:20:23.356494] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.602 "name": "raid_bdev1", 00:20:52.602 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:52.602 "strip_size_kb": 64, 00:20:52.602 "state": "online", 00:20:52.602 "raid_level": "raid5f", 00:20:52.602 "superblock": false, 00:20:52.602 "num_base_bdevs": 4, 00:20:52.602 "num_base_bdevs_discovered": 3, 00:20:52.602 "num_base_bdevs_operational": 3, 00:20:52.602 "base_bdevs_list": [ 00:20:52.602 { 00:20:52.602 "name": null, 00:20:52.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.602 "is_configured": false, 00:20:52.602 "data_offset": 0, 
00:20:52.602 "data_size": 65536 00:20:52.602 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev2", 00:20:52.602 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev3", 00:20:52.602 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 }, 00:20:52.602 { 00:20:52.602 "name": "BaseBdev4", 00:20:52.602 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:52.602 "is_configured": true, 00:20:52.602 "data_offset": 0, 00:20:52.602 "data_size": 65536 00:20:52.602 } 00:20:52.602 ] 00:20:52.602 }' 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.602 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.170 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.170 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.170 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.171 "name": "raid_bdev1", 00:20:53.171 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:53.171 "strip_size_kb": 64, 00:20:53.171 "state": "online", 00:20:53.171 "raid_level": "raid5f", 00:20:53.171 "superblock": false, 00:20:53.171 "num_base_bdevs": 4, 00:20:53.171 "num_base_bdevs_discovered": 3, 00:20:53.171 "num_base_bdevs_operational": 3, 00:20:53.171 "base_bdevs_list": [ 00:20:53.171 { 00:20:53.171 "name": null, 00:20:53.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.171 "is_configured": false, 00:20:53.171 "data_offset": 0, 00:20:53.171 "data_size": 65536 00:20:53.171 }, 00:20:53.171 { 00:20:53.171 "name": "BaseBdev2", 00:20:53.171 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:53.171 "is_configured": true, 00:20:53.171 "data_offset": 0, 00:20:53.171 "data_size": 65536 00:20:53.171 }, 00:20:53.171 { 00:20:53.171 "name": "BaseBdev3", 00:20:53.171 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:53.171 "is_configured": true, 00:20:53.171 "data_offset": 0, 00:20:53.171 "data_size": 65536 00:20:53.171 }, 00:20:53.171 { 00:20:53.171 "name": "BaseBdev4", 00:20:53.171 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:53.171 "is_configured": true, 00:20:53.171 "data_offset": 0, 00:20:53.171 "data_size": 65536 00:20:53.171 } 00:20:53.171 ] 00:20:53.171 }' 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.171 14:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.171 [2024-11-27 14:20:23.997051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:53.171 [2024-11-27 14:20:24.015167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:53.171 14:20:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.171 14:20:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:53.171 [2024-11-27 14:20:24.026207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.136 "name": "raid_bdev1", 00:20:54.136 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:54.136 "strip_size_kb": 64, 00:20:54.136 "state": "online", 00:20:54.136 "raid_level": "raid5f", 00:20:54.136 "superblock": false, 00:20:54.136 "num_base_bdevs": 4, 00:20:54.136 "num_base_bdevs_discovered": 4, 00:20:54.136 "num_base_bdevs_operational": 4, 00:20:54.136 "process": { 00:20:54.136 "type": "rebuild", 00:20:54.136 "target": "spare", 00:20:54.136 "progress": { 00:20:54.136 "blocks": 17280, 00:20:54.136 "percent": 8 00:20:54.136 } 00:20:54.136 }, 00:20:54.136 "base_bdevs_list": [ 00:20:54.136 { 00:20:54.136 "name": "spare", 00:20:54.136 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:54.136 "is_configured": true, 00:20:54.136 "data_offset": 0, 00:20:54.136 "data_size": 65536 00:20:54.136 }, 00:20:54.136 { 00:20:54.136 "name": "BaseBdev2", 00:20:54.136 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:54.136 "is_configured": true, 00:20:54.136 "data_offset": 0, 00:20:54.136 "data_size": 65536 00:20:54.136 }, 00:20:54.136 { 00:20:54.136 "name": "BaseBdev3", 00:20:54.136 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:54.136 "is_configured": true, 00:20:54.136 "data_offset": 0, 00:20:54.136 "data_size": 65536 00:20:54.136 }, 00:20:54.136 { 00:20:54.136 "name": "BaseBdev4", 00:20:54.136 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:54.136 "is_configured": true, 00:20:54.136 "data_offset": 0, 00:20:54.136 "data_size": 65536 00:20:54.136 } 00:20:54.136 ] 00:20:54.136 }' 00:20:54.136 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=633 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.397 "name": "raid_bdev1", 00:20:54.397 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:54.397 "strip_size_kb": 64, 00:20:54.397 "state": "online", 00:20:54.397 "raid_level": "raid5f", 00:20:54.397 "superblock": false, 
00:20:54.397 "num_base_bdevs": 4, 00:20:54.397 "num_base_bdevs_discovered": 4, 00:20:54.397 "num_base_bdevs_operational": 4, 00:20:54.397 "process": { 00:20:54.397 "type": "rebuild", 00:20:54.397 "target": "spare", 00:20:54.397 "progress": { 00:20:54.397 "blocks": 21120, 00:20:54.397 "percent": 10 00:20:54.397 } 00:20:54.397 }, 00:20:54.397 "base_bdevs_list": [ 00:20:54.397 { 00:20:54.397 "name": "spare", 00:20:54.397 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:54.397 "is_configured": true, 00:20:54.397 "data_offset": 0, 00:20:54.397 "data_size": 65536 00:20:54.397 }, 00:20:54.397 { 00:20:54.397 "name": "BaseBdev2", 00:20:54.397 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:54.397 "is_configured": true, 00:20:54.397 "data_offset": 0, 00:20:54.397 "data_size": 65536 00:20:54.397 }, 00:20:54.397 { 00:20:54.397 "name": "BaseBdev3", 00:20:54.397 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:54.397 "is_configured": true, 00:20:54.397 "data_offset": 0, 00:20:54.397 "data_size": 65536 00:20:54.397 }, 00:20:54.397 { 00:20:54.397 "name": "BaseBdev4", 00:20:54.397 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:54.397 "is_configured": true, 00:20:54.397 "data_offset": 0, 00:20:54.397 "data_size": 65536 00:20:54.397 } 00:20:54.397 ] 00:20:54.397 }' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.397 14:20:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.776 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.776 "name": "raid_bdev1", 00:20:55.776 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:55.776 "strip_size_kb": 64, 00:20:55.776 "state": "online", 00:20:55.776 "raid_level": "raid5f", 00:20:55.776 "superblock": false, 00:20:55.776 "num_base_bdevs": 4, 00:20:55.776 "num_base_bdevs_discovered": 4, 00:20:55.776 "num_base_bdevs_operational": 4, 00:20:55.776 "process": { 00:20:55.776 "type": "rebuild", 00:20:55.776 "target": "spare", 00:20:55.776 "progress": { 00:20:55.776 "blocks": 44160, 00:20:55.776 "percent": 22 00:20:55.776 } 00:20:55.776 }, 00:20:55.776 "base_bdevs_list": [ 00:20:55.776 { 00:20:55.776 "name": "spare", 00:20:55.776 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:55.776 "is_configured": true, 00:20:55.776 "data_offset": 0, 00:20:55.776 "data_size": 65536 00:20:55.776 }, 00:20:55.776 { 00:20:55.776 
"name": "BaseBdev2", 00:20:55.776 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:55.776 "is_configured": true, 00:20:55.776 "data_offset": 0, 00:20:55.776 "data_size": 65536 00:20:55.776 }, 00:20:55.776 { 00:20:55.776 "name": "BaseBdev3", 00:20:55.776 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:55.776 "is_configured": true, 00:20:55.776 "data_offset": 0, 00:20:55.776 "data_size": 65536 00:20:55.776 }, 00:20:55.776 { 00:20:55.777 "name": "BaseBdev4", 00:20:55.777 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:55.777 "is_configured": true, 00:20:55.777 "data_offset": 0, 00:20:55.777 "data_size": 65536 00:20:55.777 } 00:20:55.777 ] 00:20:55.777 }' 00:20:55.777 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.777 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.777 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.777 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.777 14:20:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.715 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:56.715 "name": "raid_bdev1", 00:20:56.715 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:56.715 "strip_size_kb": 64, 00:20:56.715 "state": "online", 00:20:56.715 "raid_level": "raid5f", 00:20:56.715 "superblock": false, 00:20:56.715 "num_base_bdevs": 4, 00:20:56.716 "num_base_bdevs_discovered": 4, 00:20:56.716 "num_base_bdevs_operational": 4, 00:20:56.716 "process": { 00:20:56.716 "type": "rebuild", 00:20:56.716 "target": "spare", 00:20:56.716 "progress": { 00:20:56.716 "blocks": 65280, 00:20:56.716 "percent": 33 00:20:56.716 } 00:20:56.716 }, 00:20:56.716 "base_bdevs_list": [ 00:20:56.716 { 00:20:56.716 "name": "spare", 00:20:56.716 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:56.716 "is_configured": true, 00:20:56.716 "data_offset": 0, 00:20:56.716 "data_size": 65536 00:20:56.716 }, 00:20:56.716 { 00:20:56.716 "name": "BaseBdev2", 00:20:56.716 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:56.716 "is_configured": true, 00:20:56.716 "data_offset": 0, 00:20:56.716 "data_size": 65536 00:20:56.716 }, 00:20:56.716 { 00:20:56.716 "name": "BaseBdev3", 00:20:56.716 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:56.716 "is_configured": true, 00:20:56.716 "data_offset": 0, 00:20:56.716 "data_size": 65536 00:20:56.716 }, 00:20:56.716 { 00:20:56.716 "name": "BaseBdev4", 00:20:56.716 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:56.716 "is_configured": true, 00:20:56.716 "data_offset": 0, 00:20:56.716 
"data_size": 65536 00:20:56.716 } 00:20:56.716 ] 00:20:56.716 }' 00:20:56.716 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:56.716 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.716 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:56.716 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.716 14:20:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.094 "name": "raid_bdev1", 00:20:58.094 "uuid": 
"2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:58.094 "strip_size_kb": 64, 00:20:58.094 "state": "online", 00:20:58.094 "raid_level": "raid5f", 00:20:58.094 "superblock": false, 00:20:58.094 "num_base_bdevs": 4, 00:20:58.094 "num_base_bdevs_discovered": 4, 00:20:58.094 "num_base_bdevs_operational": 4, 00:20:58.094 "process": { 00:20:58.094 "type": "rebuild", 00:20:58.094 "target": "spare", 00:20:58.094 "progress": { 00:20:58.094 "blocks": 86400, 00:20:58.094 "percent": 43 00:20:58.094 } 00:20:58.094 }, 00:20:58.094 "base_bdevs_list": [ 00:20:58.094 { 00:20:58.094 "name": "spare", 00:20:58.094 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:58.094 "is_configured": true, 00:20:58.094 "data_offset": 0, 00:20:58.094 "data_size": 65536 00:20:58.094 }, 00:20:58.094 { 00:20:58.094 "name": "BaseBdev2", 00:20:58.094 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:58.094 "is_configured": true, 00:20:58.094 "data_offset": 0, 00:20:58.094 "data_size": 65536 00:20:58.094 }, 00:20:58.094 { 00:20:58.094 "name": "BaseBdev3", 00:20:58.094 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:58.094 "is_configured": true, 00:20:58.094 "data_offset": 0, 00:20:58.094 "data_size": 65536 00:20:58.094 }, 00:20:58.094 { 00:20:58.094 "name": "BaseBdev4", 00:20:58.094 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:58.094 "is_configured": true, 00:20:58.094 "data_offset": 0, 00:20:58.094 "data_size": 65536 00:20:58.094 } 00:20:58.094 ] 00:20:58.094 }' 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.094 14:20:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.031 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.031 "name": "raid_bdev1", 00:20:59.031 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:20:59.031 "strip_size_kb": 64, 00:20:59.031 "state": "online", 00:20:59.031 "raid_level": "raid5f", 00:20:59.031 "superblock": false, 00:20:59.031 "num_base_bdevs": 4, 00:20:59.031 "num_base_bdevs_discovered": 4, 00:20:59.031 "num_base_bdevs_operational": 4, 00:20:59.031 "process": { 00:20:59.031 "type": "rebuild", 00:20:59.031 "target": "spare", 00:20:59.031 "progress": { 00:20:59.031 "blocks": 109440, 00:20:59.031 "percent": 55 00:20:59.031 } 00:20:59.031 }, 00:20:59.031 "base_bdevs_list": [ 00:20:59.031 { 00:20:59.031 "name": "spare", 00:20:59.031 "uuid": 
"1e58623e-a3f8-5874-9526-cb3146e25672", 00:20:59.031 "is_configured": true, 00:20:59.031 "data_offset": 0, 00:20:59.031 "data_size": 65536 00:20:59.031 }, 00:20:59.031 { 00:20:59.031 "name": "BaseBdev2", 00:20:59.031 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:20:59.031 "is_configured": true, 00:20:59.031 "data_offset": 0, 00:20:59.031 "data_size": 65536 00:20:59.031 }, 00:20:59.031 { 00:20:59.031 "name": "BaseBdev3", 00:20:59.031 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:20:59.031 "is_configured": true, 00:20:59.031 "data_offset": 0, 00:20:59.031 "data_size": 65536 00:20:59.031 }, 00:20:59.031 { 00:20:59.031 "name": "BaseBdev4", 00:20:59.031 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:20:59.031 "is_configured": true, 00:20:59.031 "data_offset": 0, 00:20:59.031 "data_size": 65536 00:20:59.031 } 00:20:59.032 ] 00:20:59.032 }' 00:20:59.032 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.032 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.032 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.032 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.032 14:20:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.057 14:20:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.057 "name": "raid_bdev1", 00:21:00.057 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:00.057 "strip_size_kb": 64, 00:21:00.057 "state": "online", 00:21:00.057 "raid_level": "raid5f", 00:21:00.057 "superblock": false, 00:21:00.057 "num_base_bdevs": 4, 00:21:00.057 "num_base_bdevs_discovered": 4, 00:21:00.057 "num_base_bdevs_operational": 4, 00:21:00.057 "process": { 00:21:00.057 "type": "rebuild", 00:21:00.057 "target": "spare", 00:21:00.057 "progress": { 00:21:00.057 "blocks": 130560, 00:21:00.057 "percent": 66 00:21:00.057 } 00:21:00.057 }, 00:21:00.057 "base_bdevs_list": [ 00:21:00.057 { 00:21:00.057 "name": "spare", 00:21:00.057 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:00.057 "is_configured": true, 00:21:00.057 "data_offset": 0, 00:21:00.057 "data_size": 65536 00:21:00.057 }, 00:21:00.057 { 00:21:00.057 "name": "BaseBdev2", 00:21:00.057 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:00.057 "is_configured": true, 00:21:00.057 "data_offset": 0, 00:21:00.057 "data_size": 65536 00:21:00.057 }, 00:21:00.057 { 00:21:00.057 "name": "BaseBdev3", 00:21:00.057 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:00.057 "is_configured": true, 00:21:00.057 "data_offset": 0, 00:21:00.057 "data_size": 65536 00:21:00.057 }, 
00:21:00.057 { 00:21:00.057 "name": "BaseBdev4", 00:21:00.057 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:00.057 "is_configured": true, 00:21:00.057 "data_offset": 0, 00:21:00.057 "data_size": 65536 00:21:00.057 } 00:21:00.057 ] 00:21:00.057 }' 00:21:00.057 14:20:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.057 14:20:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.317 14:20:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.317 14:20:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.317 14:20:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:01.258 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.258 "name": "raid_bdev1", 00:21:01.258 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:01.258 "strip_size_kb": 64, 00:21:01.258 "state": "online", 00:21:01.258 "raid_level": "raid5f", 00:21:01.258 "superblock": false, 00:21:01.258 "num_base_bdevs": 4, 00:21:01.258 "num_base_bdevs_discovered": 4, 00:21:01.258 "num_base_bdevs_operational": 4, 00:21:01.258 "process": { 00:21:01.258 "type": "rebuild", 00:21:01.258 "target": "spare", 00:21:01.258 "progress": { 00:21:01.258 "blocks": 151680, 00:21:01.258 "percent": 77 00:21:01.258 } 00:21:01.258 }, 00:21:01.258 "base_bdevs_list": [ 00:21:01.258 { 00:21:01.258 "name": "spare", 00:21:01.258 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:01.258 "is_configured": true, 00:21:01.258 "data_offset": 0, 00:21:01.258 "data_size": 65536 00:21:01.258 }, 00:21:01.258 { 00:21:01.258 "name": "BaseBdev2", 00:21:01.258 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:01.258 "is_configured": true, 00:21:01.258 "data_offset": 0, 00:21:01.258 "data_size": 65536 00:21:01.258 }, 00:21:01.258 { 00:21:01.258 "name": "BaseBdev3", 00:21:01.258 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:01.258 "is_configured": true, 00:21:01.258 "data_offset": 0, 00:21:01.258 "data_size": 65536 00:21:01.258 }, 00:21:01.258 { 00:21:01.258 "name": "BaseBdev4", 00:21:01.258 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:01.258 "is_configured": true, 00:21:01.258 "data_offset": 0, 00:21:01.258 "data_size": 65536 00:21:01.258 } 00:21:01.258 ] 00:21:01.259 }' 00:21:01.259 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.259 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.259 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.259 14:20:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.259 14:20:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.641 "name": "raid_bdev1", 00:21:02.641 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:02.641 "strip_size_kb": 64, 00:21:02.641 "state": "online", 00:21:02.641 "raid_level": "raid5f", 00:21:02.641 "superblock": false, 00:21:02.641 "num_base_bdevs": 4, 00:21:02.641 "num_base_bdevs_discovered": 4, 00:21:02.641 "num_base_bdevs_operational": 4, 00:21:02.641 "process": { 00:21:02.641 "type": "rebuild", 00:21:02.641 "target": "spare", 00:21:02.641 "progress": { 00:21:02.641 "blocks": 174720, 
00:21:02.641 "percent": 88 00:21:02.641 } 00:21:02.641 }, 00:21:02.641 "base_bdevs_list": [ 00:21:02.641 { 00:21:02.641 "name": "spare", 00:21:02.641 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:02.641 "is_configured": true, 00:21:02.641 "data_offset": 0, 00:21:02.641 "data_size": 65536 00:21:02.641 }, 00:21:02.641 { 00:21:02.641 "name": "BaseBdev2", 00:21:02.641 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:02.641 "is_configured": true, 00:21:02.641 "data_offset": 0, 00:21:02.641 "data_size": 65536 00:21:02.641 }, 00:21:02.641 { 00:21:02.641 "name": "BaseBdev3", 00:21:02.641 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:02.641 "is_configured": true, 00:21:02.641 "data_offset": 0, 00:21:02.641 "data_size": 65536 00:21:02.641 }, 00:21:02.641 { 00:21:02.641 "name": "BaseBdev4", 00:21:02.641 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:02.641 "is_configured": true, 00:21:02.641 "data_offset": 0, 00:21:02.641 "data_size": 65536 00:21:02.641 } 00:21:02.641 ] 00:21:02.641 }' 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:02.641 14:20:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.581 [2024-11-27 14:20:34.407252] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:03.581 [2024-11-27 14:20:34.407421] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:03.581 [2024-11-27 14:20:34.407510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:03.581 "name": "raid_bdev1", 00:21:03.581 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:03.581 "strip_size_kb": 64, 00:21:03.581 "state": "online", 00:21:03.581 "raid_level": "raid5f", 00:21:03.581 "superblock": false, 00:21:03.581 "num_base_bdevs": 4, 00:21:03.581 "num_base_bdevs_discovered": 4, 00:21:03.581 "num_base_bdevs_operational": 4, 00:21:03.581 "process": { 00:21:03.581 "type": "rebuild", 00:21:03.581 "target": "spare", 00:21:03.581 "progress": { 00:21:03.581 "blocks": 195840, 00:21:03.581 "percent": 99 00:21:03.581 } 00:21:03.581 }, 00:21:03.581 "base_bdevs_list": [ 00:21:03.581 { 00:21:03.581 "name": "spare", 00:21:03.581 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:03.581 "is_configured": true, 
00:21:03.581 "data_offset": 0, 00:21:03.581 "data_size": 65536 00:21:03.581 }, 00:21:03.581 { 00:21:03.581 "name": "BaseBdev2", 00:21:03.581 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:03.581 "is_configured": true, 00:21:03.581 "data_offset": 0, 00:21:03.581 "data_size": 65536 00:21:03.581 }, 00:21:03.581 { 00:21:03.581 "name": "BaseBdev3", 00:21:03.581 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:03.581 "is_configured": true, 00:21:03.581 "data_offset": 0, 00:21:03.581 "data_size": 65536 00:21:03.581 }, 00:21:03.581 { 00:21:03.581 "name": "BaseBdev4", 00:21:03.581 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:03.581 "is_configured": true, 00:21:03.581 "data_offset": 0, 00:21:03.581 "data_size": 65536 00:21:03.581 } 00:21:03.581 ] 00:21:03.581 }' 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.581 14:20:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.962 "name": "raid_bdev1", 00:21:04.962 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:04.962 "strip_size_kb": 64, 00:21:04.962 "state": "online", 00:21:04.962 "raid_level": "raid5f", 00:21:04.962 "superblock": false, 00:21:04.962 "num_base_bdevs": 4, 00:21:04.962 "num_base_bdevs_discovered": 4, 00:21:04.962 "num_base_bdevs_operational": 4, 00:21:04.962 "base_bdevs_list": [ 00:21:04.962 { 00:21:04.962 "name": "spare", 00:21:04.962 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:04.962 "is_configured": true, 00:21:04.962 "data_offset": 0, 00:21:04.962 "data_size": 65536 00:21:04.962 }, 00:21:04.962 { 00:21:04.962 "name": "BaseBdev2", 00:21:04.962 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:04.962 "is_configured": true, 00:21:04.962 "data_offset": 0, 00:21:04.962 "data_size": 65536 00:21:04.962 }, 00:21:04.962 { 00:21:04.962 "name": "BaseBdev3", 00:21:04.962 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:04.962 "is_configured": true, 00:21:04.962 "data_offset": 0, 00:21:04.962 "data_size": 65536 00:21:04.962 }, 00:21:04.962 { 00:21:04.962 "name": "BaseBdev4", 00:21:04.962 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:04.962 "is_configured": true, 00:21:04.962 "data_offset": 0, 00:21:04.962 "data_size": 65536 00:21:04.962 } 00:21:04.962 ] 00:21:04.962 }' 00:21:04.962 14:20:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:04.962 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.963 "name": "raid_bdev1", 00:21:04.963 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:04.963 "strip_size_kb": 64, 00:21:04.963 "state": "online", 00:21:04.963 "raid_level": "raid5f", 00:21:04.963 "superblock": false, 00:21:04.963 "num_base_bdevs": 4, 00:21:04.963 
"num_base_bdevs_discovered": 4, 00:21:04.963 "num_base_bdevs_operational": 4, 00:21:04.963 "base_bdevs_list": [ 00:21:04.963 { 00:21:04.963 "name": "spare", 00:21:04.963 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev2", 00:21:04.963 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev3", 00:21:04.963 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev4", 00:21:04.963 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 } 00:21:04.963 ] 00:21:04.963 }' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.963 "name": "raid_bdev1", 00:21:04.963 "uuid": "2746d51c-0e3b-49b8-abe7-2dbbbdaf03c5", 00:21:04.963 "strip_size_kb": 64, 00:21:04.963 "state": "online", 00:21:04.963 "raid_level": "raid5f", 00:21:04.963 "superblock": false, 00:21:04.963 "num_base_bdevs": 4, 00:21:04.963 "num_base_bdevs_discovered": 4, 00:21:04.963 "num_base_bdevs_operational": 4, 00:21:04.963 "base_bdevs_list": [ 00:21:04.963 { 00:21:04.963 "name": "spare", 00:21:04.963 "uuid": "1e58623e-a3f8-5874-9526-cb3146e25672", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev2", 00:21:04.963 "uuid": "c7d2c4f1-c656-5714-b3b2-d11c0d807f66", 00:21:04.963 "is_configured": true, 00:21:04.963 
"data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev3", 00:21:04.963 "uuid": "0ad04ef8-b9d0-5b13-ae27-7decfaabfa8e", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 }, 00:21:04.963 { 00:21:04.963 "name": "BaseBdev4", 00:21:04.963 "uuid": "e05aff86-22c6-513b-99b8-47555ae55bc9", 00:21:04.963 "is_configured": true, 00:21:04.963 "data_offset": 0, 00:21:04.963 "data_size": 65536 00:21:04.963 } 00:21:04.963 ] 00:21:04.963 }' 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.963 14:20:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 [2024-11-27 14:20:36.250930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.531 [2024-11-27 14:20:36.250970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.531 [2024-11-27 14:20:36.251092] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.531 [2024-11-27 14:20:36.251217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.531 [2024-11-27 14:20:36.251230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.531 14:20:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:05.531 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:05.790 /dev/nbd0 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:05.790 14:20:36 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:05.790 1+0 records in 00:21:05.790 1+0 records out 00:21:05.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543653 s, 7.5 MB/s 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:21:05.790 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:06.050 /dev/nbd1 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:06.050 1+0 records in 00:21:06.050 1+0 records out 00:21:06.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468809 s, 8.7 MB/s 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.050 14:20:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:06.050 14:20:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.309 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:06.568 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84842 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84842 ']' 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84842 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84842 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84842' 00:21:06.915 killing process with pid 84842 00:21:06.915 Received shutdown signal, test time was about 60.000000 seconds 00:21:06.915 00:21:06.915 Latency(us) 00:21:06.915 [2024-11-27T14:20:37.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.915 [2024-11-27T14:20:37.871Z] =================================================================================================================== 00:21:06.915 [2024-11-27T14:20:37.871Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84842 00:21:06.915 14:20:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84842 00:21:06.915 [2024-11-27 14:20:37.630487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:07.497 [2024-11-27 14:20:38.167207] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:08.449 14:20:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:08.449 00:21:08.449 real 0m20.568s 00:21:08.449 user 0m24.686s 00:21:08.449 sys 0m2.346s 00:21:08.449 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:08.449 ************************************ 00:21:08.449 END TEST raid5f_rebuild_test 00:21:08.449 ************************************ 00:21:08.449 14:20:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.449 14:20:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:08.449 14:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:21:08.449 14:20:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:08.449 14:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:21:08.709 ************************************
00:21:08.709 START TEST raid5f_rebuild_test_sb
00:21:08.709 ************************************
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85369
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85369
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85369 ']'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:08.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:08.709 14:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:08.709 [2024-11-27 14:20:39.512509] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization...
00:21:08.709 I/O size of 3145728 is greater than zero copy threshold (65536).
00:21:08.709 Zero copy mechanism will not be used.
00:21:08.709 [2024-11-27 14:20:39.512733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85369 ]
00:21:08.969 [2024-11-27 14:20:39.688256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:08.969 [2024-11-27 14:20:39.811606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:09.229 [2024-11-27 14:20:40.024166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:09.229 [2024-11-27 14:20:40.024282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.490 BaseBdev1_malloc
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.490 [2024-11-27 14:20:40.432084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:21:09.490 [2024-11-27 14:20:40.432192] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:09.490 [2024-11-27 14:20:40.432219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:21:09.490 [2024-11-27 14:20:40.432232] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:09.490 [2024-11-27 14:20:40.434571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:09.490 [2024-11-27 14:20:40.434639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:21:09.490 BaseBdev1
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.490 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 BaseBdev2_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 [2024-11-27 14:20:40.489828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:21:09.752 [2024-11-27 14:20:40.489907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:09.752 [2024-11-27 14:20:40.489933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:21:09.752 [2024-11-27 14:20:40.489946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:09.752 [2024-11-27 14:20:40.492364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:09.752 [2024-11-27 14:20:40.492473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:21:09.752 BaseBdev2
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 BaseBdev3_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 [2024-11-27 14:20:40.558053] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:21:09.752 [2024-11-27 14:20:40.558141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:09.752 [2024-11-27 14:20:40.558168] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:21:09.752 [2024-11-27 14:20:40.558179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:09.752 [2024-11-27 14:20:40.560459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:09.752 [2024-11-27 14:20:40.560560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:21:09.752 BaseBdev3
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 BaseBdev4_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 [2024-11-27 14:20:40.617942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:21:09.752 [2024-11-27 14:20:40.618022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:09.752 [2024-11-27 14:20:40.618049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:21:09.752 [2024-11-27 14:20:40.618061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:09.752 [2024-11-27 14:20:40.620550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:09.752 [2024-11-27 14:20:40.620599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:21:09.752 BaseBdev4
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 spare_malloc
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 spare_delay
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.752 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.752 [2024-11-27 14:20:40.687818] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:21:09.752 [2024-11-27 14:20:40.687897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:21:09.752 [2024-11-27 14:20:40.687918] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:21:09.752 [2024-11-27 14:20:40.687929] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:21:09.752 [2024-11-27 14:20:40.690320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:21:09.752 [2024-11-27 14:20:40.690405] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:21:09.752 spare
00:21:09.753 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.753 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:21:09.753 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.753 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:09.753 [2024-11-27 14:20:40.699848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:21:09.753 [2024-11-27 14:20:40.701943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:21:09.753 [2024-11-27 14:20:40.702008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:21:09.753 [2024-11-27 14:20:40.702061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:21:09.753 [2024-11-27 14:20:40.702274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:21:09.753 [2024-11-27 14:20:40.702306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:21:09.753 [2024-11-27 14:20:40.702626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:21:10.013 [2024-11-27 14:20:40.710851] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:21:10.013 [2024-11-27 14:20:40.710914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:21:10.013 [2024-11-27 14:20:40.711243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.013 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:10.013 "name": "raid_bdev1",
00:21:10.013 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6",
00:21:10.013 "strip_size_kb": 64,
00:21:10.013 "state": "online",
00:21:10.013 "raid_level": "raid5f",
00:21:10.013 "superblock": true,
00:21:10.013 "num_base_bdevs": 4,
00:21:10.013 "num_base_bdevs_discovered": 4,
00:21:10.013 "num_base_bdevs_operational": 4,
00:21:10.013 "base_bdevs_list": [
00:21:10.013 {
00:21:10.013 "name": "BaseBdev1",
00:21:10.013 "uuid": "44b1b71a-e81a-520c-9444-b8a57df5f84c",
00:21:10.013 "is_configured": true,
00:21:10.013 "data_offset": 2048,
00:21:10.013 "data_size": 63488
00:21:10.013 },
00:21:10.013 {
00:21:10.013 "name": "BaseBdev2",
00:21:10.013 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8",
00:21:10.013 "is_configured": true,
00:21:10.013 "data_offset": 2048,
00:21:10.013 "data_size": 63488
00:21:10.013 },
00:21:10.013 {
00:21:10.013 "name": "BaseBdev3",
00:21:10.014 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c",
00:21:10.014 "is_configured": true,
00:21:10.014 "data_offset": 2048,
00:21:10.014 "data_size": 63488
00:21:10.014 },
00:21:10.014 {
00:21:10.014 "name": "BaseBdev4",
00:21:10.014 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0",
00:21:10.014 "is_configured": true,
00:21:10.014 "data_offset": 2048,
00:21:10.014 "data_size": 63488
00:21:10.014 }
00:21:10.014 ]
00:21:10.014 }'
00:21:10.014 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:10.014 14:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.273 [2024-11-27 14:20:41.120369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:10.273 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:21:10.533 [2024-11-27 14:20:41.399742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:21:10.533 /dev/nbd0
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:21:10.533 1+0 records in
00:21:10.533 1+0 records out
00:21:10.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593534 s, 6.9 MB/s
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192
00:21:10.533 14:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct
00:21:11.102 496+0 records in
00:21:11.102 496+0 records out
00:21:11.102 97517568 bytes (98 MB, 93 MiB) copied, 0.534961 s, 182 MB/s
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:21:11.102 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:21:11.363 [2024-11-27 14:20:42.235174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:11.363 [2024-11-27 14:20:42.270577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:11.363 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.622 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:21:11.622 "name": "raid_bdev1",
00:21:11.622 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6",
00:21:11.622 "strip_size_kb": 64,
00:21:11.622 "state": "online",
00:21:11.622 "raid_level": "raid5f",
00:21:11.622 "superblock": true,
00:21:11.622 "num_base_bdevs": 4,
00:21:11.622 "num_base_bdevs_discovered": 3,
00:21:11.622 "num_base_bdevs_operational": 3,
00:21:11.622 "base_bdevs_list": [
00:21:11.622 {
00:21:11.622 "name": null,
00:21:11.622 "uuid": "00000000-0000-0000-0000-000000000000",
00:21:11.622 "is_configured": false,
00:21:11.622 "data_offset": 0,
00:21:11.622 "data_size": 63488
00:21:11.622 },
00:21:11.622 {
00:21:11.622 "name": "BaseBdev2",
00:21:11.622 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8",
00:21:11.622 "is_configured": true,
00:21:11.622 "data_offset": 2048,
00:21:11.622 "data_size": 63488
00:21:11.622 },
00:21:11.622 {
00:21:11.622 "name": "BaseBdev3",
00:21:11.622 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c",
00:21:11.622 "is_configured": true,
00:21:11.622 "data_offset": 2048,
00:21:11.622 "data_size": 63488
00:21:11.622 },
00:21:11.622 {
00:21:11.622 "name": "BaseBdev4",
00:21:11.622 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0",
00:21:11.622 "is_configured": true,
00:21:11.622 "data_offset": 2048,
00:21:11.622 "data_size": 63488
00:21:11.622 }
00:21:11.622 ]
00:21:11.622 }'
00:21:11.622 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:21:11.622 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:11.881 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:21:11.881 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.881 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:11.881 [2024-11-27 14:20:42.729805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:21:11.881 [2024-11-27 14:20:42.746837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50
00:21:11.881 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.881 14:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:21:11.881 [2024-11-27 14:20:42.758598] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:12.832 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:21:13.104 "name": "raid_bdev1",
00:21:13.104 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6",
00:21:13.104 "strip_size_kb": 64,
00:21:13.104 "state": "online",
00:21:13.104 "raid_level": "raid5f",
00:21:13.104 "superblock": true,
00:21:13.104 "num_base_bdevs": 4,
00:21:13.104 "num_base_bdevs_discovered": 4,
00:21:13.104 "num_base_bdevs_operational": 4,
00:21:13.104 "process": {
00:21:13.104 "type": "rebuild",
00:21:13.104 "target": "spare",
00:21:13.104 "progress": {
00:21:13.104 "blocks": 17280,
00:21:13.104 "percent": 9
00:21:13.104 }
00:21:13.104 },
00:21:13.104 "base_bdevs_list": [
00:21:13.104 {
00:21:13.104 "name": "spare",
00:21:13.104 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3",
00:21:13.104 "is_configured": true,
00:21:13.104 "data_offset": 2048,
00:21:13.104 "data_size": 63488
00:21:13.104 },
00:21:13.104 {
00:21:13.104 "name": "BaseBdev2",
00:21:13.104 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8",
00:21:13.104 "is_configured": true,
00:21:13.104 "data_offset": 2048,
00:21:13.104 "data_size": 63488
00:21:13.104 },
00:21:13.104 {
00:21:13.104 "name": "BaseBdev3",
00:21:13.104 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c",
00:21:13.104 "is_configured": true,
00:21:13.104 "data_offset": 2048,
00:21:13.104 "data_size": 63488
00:21:13.104 },
00:21:13.104 {
00:21:13.104 "name": "BaseBdev4",
00:21:13.104 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0",
00:21:13.104 "is_configured": true,
00:21:13.104 "data_offset": 2048,
00:21:13.104 "data_size": 63488
00:21:13.104 }
00:21:13.104 ]
00:21:13.104 }'
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.104 14:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.104 [2024-11-27 14:20:43.913974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:13.104 [2024-11-27 14:20:43.968849] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:21:13.104 [2024-11-27 14:20:43.969080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:21:13.104 [2024-11-27 14:20:43.969159] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:21:13.104 [2024-11-27 14:20:43.969197] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:21:13.104 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.364 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.364 "name": "raid_bdev1", 00:21:13.364 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:13.364 "strip_size_kb": 64, 00:21:13.364 "state": "online", 00:21:13.364 "raid_level": "raid5f", 00:21:13.364 "superblock": true, 00:21:13.364 "num_base_bdevs": 4, 00:21:13.364 "num_base_bdevs_discovered": 3, 00:21:13.364 "num_base_bdevs_operational": 3, 00:21:13.364 "base_bdevs_list": [ 00:21:13.364 { 00:21:13.364 "name": null, 00:21:13.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.364 "is_configured": false, 00:21:13.364 "data_offset": 0, 00:21:13.364 "data_size": 63488 00:21:13.364 }, 00:21:13.364 { 00:21:13.364 "name": "BaseBdev2", 00:21:13.364 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:13.364 "is_configured": true, 00:21:13.364 "data_offset": 2048, 00:21:13.364 "data_size": 63488 00:21:13.364 }, 00:21:13.364 { 00:21:13.364 "name": "BaseBdev3", 00:21:13.364 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:13.364 "is_configured": true, 00:21:13.364 "data_offset": 2048, 00:21:13.364 "data_size": 63488 00:21:13.364 }, 00:21:13.364 { 00:21:13.364 "name": "BaseBdev4", 00:21:13.364 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:13.364 "is_configured": true, 00:21:13.364 "data_offset": 2048, 00:21:13.364 "data_size": 63488 00:21:13.364 } 00:21:13.364 ] 00:21:13.365 }' 00:21:13.365 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.365 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.625 
14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.625 "name": "raid_bdev1", 00:21:13.625 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:13.625 "strip_size_kb": 64, 00:21:13.625 "state": "online", 00:21:13.625 "raid_level": "raid5f", 00:21:13.625 "superblock": true, 00:21:13.625 "num_base_bdevs": 4, 00:21:13.625 "num_base_bdevs_discovered": 3, 00:21:13.625 "num_base_bdevs_operational": 3, 00:21:13.625 "base_bdevs_list": [ 00:21:13.625 { 00:21:13.625 "name": null, 00:21:13.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.625 "is_configured": false, 00:21:13.625 "data_offset": 0, 00:21:13.625 "data_size": 63488 00:21:13.625 }, 00:21:13.625 { 00:21:13.625 "name": "BaseBdev2", 00:21:13.625 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:13.625 "is_configured": true, 00:21:13.625 "data_offset": 2048, 00:21:13.625 "data_size": 63488 00:21:13.625 }, 00:21:13.625 { 00:21:13.625 "name": "BaseBdev3", 00:21:13.625 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:13.625 "is_configured": true, 00:21:13.625 "data_offset": 2048, 00:21:13.625 
"data_size": 63488 00:21:13.625 }, 00:21:13.625 { 00:21:13.625 "name": "BaseBdev4", 00:21:13.625 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:13.625 "is_configured": true, 00:21:13.625 "data_offset": 2048, 00:21:13.625 "data_size": 63488 00:21:13.625 } 00:21:13.625 ] 00:21:13.625 }' 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:13.625 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.884 [2024-11-27 14:20:44.622326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.884 [2024-11-27 14:20:44.641165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.884 14:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:13.884 [2024-11-27 14:20:44.652776] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.820 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.820 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.820 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.820 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.821 "name": "raid_bdev1", 00:21:14.821 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:14.821 "strip_size_kb": 64, 00:21:14.821 "state": "online", 00:21:14.821 "raid_level": "raid5f", 00:21:14.821 "superblock": true, 00:21:14.821 "num_base_bdevs": 4, 00:21:14.821 "num_base_bdevs_discovered": 4, 00:21:14.821 "num_base_bdevs_operational": 4, 00:21:14.821 "process": { 00:21:14.821 "type": "rebuild", 00:21:14.821 "target": "spare", 00:21:14.821 "progress": { 00:21:14.821 "blocks": 17280, 00:21:14.821 "percent": 9 00:21:14.821 } 00:21:14.821 }, 00:21:14.821 "base_bdevs_list": [ 00:21:14.821 { 00:21:14.821 "name": "spare", 00:21:14.821 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:14.821 "is_configured": true, 00:21:14.821 "data_offset": 2048, 00:21:14.821 "data_size": 63488 00:21:14.821 }, 00:21:14.821 { 00:21:14.821 "name": "BaseBdev2", 00:21:14.821 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:14.821 "is_configured": true, 00:21:14.821 "data_offset": 2048, 00:21:14.821 "data_size": 63488 00:21:14.821 }, 00:21:14.821 { 
00:21:14.821 "name": "BaseBdev3", 00:21:14.821 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:14.821 "is_configured": true, 00:21:14.821 "data_offset": 2048, 00:21:14.821 "data_size": 63488 00:21:14.821 }, 00:21:14.821 { 00:21:14.821 "name": "BaseBdev4", 00:21:14.821 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:14.821 "is_configured": true, 00:21:14.821 "data_offset": 2048, 00:21:14.821 "data_size": 63488 00:21:14.821 } 00:21:14.821 ] 00:21:14.821 }' 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.821 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:15.081 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=653 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.081 "name": "raid_bdev1", 00:21:15.081 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:15.081 "strip_size_kb": 64, 00:21:15.081 "state": "online", 00:21:15.081 "raid_level": "raid5f", 00:21:15.081 "superblock": true, 00:21:15.081 "num_base_bdevs": 4, 00:21:15.081 "num_base_bdevs_discovered": 4, 00:21:15.081 "num_base_bdevs_operational": 4, 00:21:15.081 "process": { 00:21:15.081 "type": "rebuild", 00:21:15.081 "target": "spare", 00:21:15.081 "progress": { 00:21:15.081 "blocks": 21120, 00:21:15.081 "percent": 11 00:21:15.081 } 00:21:15.081 }, 00:21:15.081 "base_bdevs_list": [ 00:21:15.081 { 00:21:15.081 "name": "spare", 00:21:15.081 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:15.081 "is_configured": true, 00:21:15.081 "data_offset": 2048, 00:21:15.081 "data_size": 63488 00:21:15.081 }, 00:21:15.081 { 00:21:15.081 "name": "BaseBdev2", 00:21:15.081 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:15.081 "is_configured": true, 00:21:15.081 "data_offset": 2048, 00:21:15.081 "data_size": 63488 00:21:15.081 }, 00:21:15.081 { 
00:21:15.081 "name": "BaseBdev3", 00:21:15.081 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:15.081 "is_configured": true, 00:21:15.081 "data_offset": 2048, 00:21:15.081 "data_size": 63488 00:21:15.081 }, 00:21:15.081 { 00:21:15.081 "name": "BaseBdev4", 00:21:15.081 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:15.081 "is_configured": true, 00:21:15.081 "data_offset": 2048, 00:21:15.081 "data_size": 63488 00:21:15.081 } 00:21:15.081 ] 00:21:15.081 }' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.081 14:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.019 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.278 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.278 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.278 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.278 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.278 14:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.278 "name": "raid_bdev1", 00:21:16.278 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:16.278 "strip_size_kb": 64, 00:21:16.278 "state": "online", 00:21:16.278 "raid_level": "raid5f", 00:21:16.278 "superblock": true, 00:21:16.278 "num_base_bdevs": 4, 00:21:16.278 "num_base_bdevs_discovered": 4, 00:21:16.278 "num_base_bdevs_operational": 4, 00:21:16.278 "process": { 00:21:16.278 "type": "rebuild", 00:21:16.278 "target": "spare", 00:21:16.278 "progress": { 00:21:16.278 "blocks": 44160, 00:21:16.278 "percent": 23 00:21:16.278 } 00:21:16.278 }, 00:21:16.278 "base_bdevs_list": [ 00:21:16.278 { 00:21:16.278 "name": "spare", 00:21:16.278 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:16.278 "is_configured": true, 00:21:16.278 "data_offset": 2048, 00:21:16.278 "data_size": 63488 00:21:16.278 }, 00:21:16.278 { 00:21:16.278 "name": "BaseBdev2", 00:21:16.278 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:16.278 "is_configured": true, 00:21:16.278 "data_offset": 2048, 00:21:16.278 "data_size": 63488 00:21:16.278 }, 00:21:16.278 { 00:21:16.278 "name": "BaseBdev3", 00:21:16.278 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:16.278 "is_configured": true, 00:21:16.278 "data_offset": 2048, 00:21:16.278 "data_size": 63488 00:21:16.278 }, 00:21:16.278 { 00:21:16.278 "name": "BaseBdev4", 00:21:16.278 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:16.278 "is_configured": true, 00:21:16.278 "data_offset": 2048, 00:21:16.278 "data_size": 63488 00:21:16.278 } 00:21:16.278 ] 00:21:16.278 }' 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.278 14:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.217 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.478 "name": "raid_bdev1", 00:21:17.478 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:17.478 "strip_size_kb": 64, 00:21:17.478 "state": "online", 00:21:17.478 
"raid_level": "raid5f", 00:21:17.478 "superblock": true, 00:21:17.478 "num_base_bdevs": 4, 00:21:17.478 "num_base_bdevs_discovered": 4, 00:21:17.478 "num_base_bdevs_operational": 4, 00:21:17.478 "process": { 00:21:17.478 "type": "rebuild", 00:21:17.478 "target": "spare", 00:21:17.478 "progress": { 00:21:17.478 "blocks": 65280, 00:21:17.478 "percent": 34 00:21:17.478 } 00:21:17.478 }, 00:21:17.478 "base_bdevs_list": [ 00:21:17.478 { 00:21:17.478 "name": "spare", 00:21:17.478 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:17.478 "is_configured": true, 00:21:17.478 "data_offset": 2048, 00:21:17.478 "data_size": 63488 00:21:17.478 }, 00:21:17.478 { 00:21:17.478 "name": "BaseBdev2", 00:21:17.478 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:17.478 "is_configured": true, 00:21:17.478 "data_offset": 2048, 00:21:17.478 "data_size": 63488 00:21:17.478 }, 00:21:17.478 { 00:21:17.478 "name": "BaseBdev3", 00:21:17.478 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:17.478 "is_configured": true, 00:21:17.478 "data_offset": 2048, 00:21:17.478 "data_size": 63488 00:21:17.478 }, 00:21:17.478 { 00:21:17.478 "name": "BaseBdev4", 00:21:17.478 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:17.478 "is_configured": true, 00:21:17.478 "data_offset": 2048, 00:21:17.478 "data_size": 63488 00:21:17.478 } 00:21:17.478 ] 00:21:17.478 }' 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.478 14:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.417 "name": "raid_bdev1", 00:21:18.417 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:18.417 "strip_size_kb": 64, 00:21:18.417 "state": "online", 00:21:18.417 "raid_level": "raid5f", 00:21:18.417 "superblock": true, 00:21:18.417 "num_base_bdevs": 4, 00:21:18.417 "num_base_bdevs_discovered": 4, 00:21:18.417 "num_base_bdevs_operational": 4, 00:21:18.417 "process": { 00:21:18.417 "type": "rebuild", 00:21:18.417 "target": "spare", 00:21:18.417 "progress": { 00:21:18.417 "blocks": 88320, 00:21:18.417 "percent": 46 00:21:18.417 } 00:21:18.417 }, 00:21:18.417 "base_bdevs_list": [ 00:21:18.417 { 00:21:18.417 "name": "spare", 00:21:18.417 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:18.417 "is_configured": true, 
00:21:18.417 "data_offset": 2048, 00:21:18.417 "data_size": 63488 00:21:18.417 }, 00:21:18.417 { 00:21:18.417 "name": "BaseBdev2", 00:21:18.417 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:18.417 "is_configured": true, 00:21:18.417 "data_offset": 2048, 00:21:18.417 "data_size": 63488 00:21:18.417 }, 00:21:18.417 { 00:21:18.417 "name": "BaseBdev3", 00:21:18.417 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:18.417 "is_configured": true, 00:21:18.417 "data_offset": 2048, 00:21:18.417 "data_size": 63488 00:21:18.417 }, 00:21:18.417 { 00:21:18.417 "name": "BaseBdev4", 00:21:18.417 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:18.417 "is_configured": true, 00:21:18.417 "data_offset": 2048, 00:21:18.417 "data_size": 63488 00:21:18.417 } 00:21:18.417 ] 00:21:18.417 }' 00:21:18.417 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.677 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.677 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.677 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.677 14:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.614 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.615 "name": "raid_bdev1", 00:21:19.615 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:19.615 "strip_size_kb": 64, 00:21:19.615 "state": "online", 00:21:19.615 "raid_level": "raid5f", 00:21:19.615 "superblock": true, 00:21:19.615 "num_base_bdevs": 4, 00:21:19.615 "num_base_bdevs_discovered": 4, 00:21:19.615 "num_base_bdevs_operational": 4, 00:21:19.615 "process": { 00:21:19.615 "type": "rebuild", 00:21:19.615 "target": "spare", 00:21:19.615 "progress": { 00:21:19.615 "blocks": 109440, 00:21:19.615 "percent": 57 00:21:19.615 } 00:21:19.615 }, 00:21:19.615 "base_bdevs_list": [ 00:21:19.615 { 00:21:19.615 "name": "spare", 00:21:19.615 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:19.615 "is_configured": true, 00:21:19.615 "data_offset": 2048, 00:21:19.615 "data_size": 63488 00:21:19.615 }, 00:21:19.615 { 00:21:19.615 "name": "BaseBdev2", 00:21:19.615 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:19.615 "is_configured": true, 00:21:19.615 "data_offset": 2048, 00:21:19.615 "data_size": 63488 00:21:19.615 }, 00:21:19.615 { 00:21:19.615 "name": "BaseBdev3", 00:21:19.615 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:19.615 "is_configured": true, 00:21:19.615 "data_offset": 2048, 00:21:19.615 "data_size": 63488 00:21:19.615 }, 00:21:19.615 
{ 00:21:19.615 "name": "BaseBdev4", 00:21:19.615 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:19.615 "is_configured": true, 00:21:19.615 "data_offset": 2048, 00:21:19.615 "data_size": 63488 00:21:19.615 } 00:21:19.615 ] 00:21:19.615 }' 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.615 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.879 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.879 14:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.833 "name": "raid_bdev1", 00:21:20.833 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:20.833 "strip_size_kb": 64, 00:21:20.833 "state": "online", 00:21:20.833 "raid_level": "raid5f", 00:21:20.833 "superblock": true, 00:21:20.833 "num_base_bdevs": 4, 00:21:20.833 "num_base_bdevs_discovered": 4, 00:21:20.833 "num_base_bdevs_operational": 4, 00:21:20.833 "process": { 00:21:20.833 "type": "rebuild", 00:21:20.833 "target": "spare", 00:21:20.833 "progress": { 00:21:20.833 "blocks": 132480, 00:21:20.833 "percent": 69 00:21:20.833 } 00:21:20.833 }, 00:21:20.833 "base_bdevs_list": [ 00:21:20.833 { 00:21:20.833 "name": "spare", 00:21:20.833 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:20.833 "is_configured": true, 00:21:20.833 "data_offset": 2048, 00:21:20.833 "data_size": 63488 00:21:20.833 }, 00:21:20.833 { 00:21:20.833 "name": "BaseBdev2", 00:21:20.833 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:20.833 "is_configured": true, 00:21:20.833 "data_offset": 2048, 00:21:20.833 "data_size": 63488 00:21:20.833 }, 00:21:20.833 { 00:21:20.833 "name": "BaseBdev3", 00:21:20.833 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:20.833 "is_configured": true, 00:21:20.833 "data_offset": 2048, 00:21:20.833 "data_size": 63488 00:21:20.833 }, 00:21:20.833 { 00:21:20.833 "name": "BaseBdev4", 00:21:20.833 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:20.833 "is_configured": true, 00:21:20.833 "data_offset": 2048, 00:21:20.833 "data_size": 63488 00:21:20.833 } 00:21:20.833 ] 00:21:20.833 }' 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.833 14:20:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.210 "name": "raid_bdev1", 00:21:22.210 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:22.210 "strip_size_kb": 64, 00:21:22.210 "state": "online", 00:21:22.210 "raid_level": "raid5f", 00:21:22.210 "superblock": true, 00:21:22.210 "num_base_bdevs": 4, 00:21:22.210 "num_base_bdevs_discovered": 4, 00:21:22.210 "num_base_bdevs_operational": 4, 00:21:22.210 "process": { 00:21:22.210 "type": 
"rebuild", 00:21:22.210 "target": "spare", 00:21:22.210 "progress": { 00:21:22.210 "blocks": 153600, 00:21:22.210 "percent": 80 00:21:22.210 } 00:21:22.210 }, 00:21:22.210 "base_bdevs_list": [ 00:21:22.210 { 00:21:22.210 "name": "spare", 00:21:22.210 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:22.210 "is_configured": true, 00:21:22.210 "data_offset": 2048, 00:21:22.210 "data_size": 63488 00:21:22.210 }, 00:21:22.210 { 00:21:22.210 "name": "BaseBdev2", 00:21:22.210 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:22.210 "is_configured": true, 00:21:22.210 "data_offset": 2048, 00:21:22.210 "data_size": 63488 00:21:22.210 }, 00:21:22.210 { 00:21:22.210 "name": "BaseBdev3", 00:21:22.210 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:22.210 "is_configured": true, 00:21:22.210 "data_offset": 2048, 00:21:22.210 "data_size": 63488 00:21:22.210 }, 00:21:22.210 { 00:21:22.210 "name": "BaseBdev4", 00:21:22.210 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:22.210 "is_configured": true, 00:21:22.210 "data_offset": 2048, 00:21:22.210 "data_size": 63488 00:21:22.210 } 00:21:22.210 ] 00:21:22.210 }' 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.210 14:20:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.145 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.145 "name": "raid_bdev1", 00:21:23.145 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:23.145 "strip_size_kb": 64, 00:21:23.145 "state": "online", 00:21:23.145 "raid_level": "raid5f", 00:21:23.145 "superblock": true, 00:21:23.145 "num_base_bdevs": 4, 00:21:23.145 "num_base_bdevs_discovered": 4, 00:21:23.145 "num_base_bdevs_operational": 4, 00:21:23.145 "process": { 00:21:23.145 "type": "rebuild", 00:21:23.145 "target": "spare", 00:21:23.145 "progress": { 00:21:23.145 "blocks": 174720, 00:21:23.145 "percent": 91 00:21:23.145 } 00:21:23.146 }, 00:21:23.146 "base_bdevs_list": [ 00:21:23.146 { 00:21:23.146 "name": "spare", 00:21:23.146 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:23.146 "is_configured": true, 00:21:23.146 "data_offset": 2048, 00:21:23.146 "data_size": 63488 00:21:23.146 }, 00:21:23.146 { 00:21:23.146 "name": "BaseBdev2", 00:21:23.146 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:23.146 
"is_configured": true, 00:21:23.146 "data_offset": 2048, 00:21:23.146 "data_size": 63488 00:21:23.146 }, 00:21:23.146 { 00:21:23.146 "name": "BaseBdev3", 00:21:23.146 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:23.146 "is_configured": true, 00:21:23.146 "data_offset": 2048, 00:21:23.146 "data_size": 63488 00:21:23.146 }, 00:21:23.146 { 00:21:23.146 "name": "BaseBdev4", 00:21:23.146 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:23.146 "is_configured": true, 00:21:23.146 "data_offset": 2048, 00:21:23.146 "data_size": 63488 00:21:23.146 } 00:21:23.146 ] 00:21:23.146 }' 00:21:23.146 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.146 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.146 14:20:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.146 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.146 14:20:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:24.081 [2024-11-27 14:20:54.734985] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:24.081 [2024-11-27 14:20:54.735223] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:24.081 [2024-11-27 14:20:54.735435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.341 "name": "raid_bdev1", 00:21:24.341 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:24.341 "strip_size_kb": 64, 00:21:24.341 "state": "online", 00:21:24.341 "raid_level": "raid5f", 00:21:24.341 "superblock": true, 00:21:24.341 "num_base_bdevs": 4, 00:21:24.341 "num_base_bdevs_discovered": 4, 00:21:24.341 "num_base_bdevs_operational": 4, 00:21:24.341 "base_bdevs_list": [ 00:21:24.341 { 00:21:24.341 "name": "spare", 00:21:24.341 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": "BaseBdev2", 00:21:24.341 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": "BaseBdev3", 00:21:24.341 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": 
"BaseBdev4", 00:21:24.341 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 } 00:21:24.341 ] 00:21:24.341 }' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:24.341 "name": "raid_bdev1", 00:21:24.341 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:24.341 "strip_size_kb": 64, 00:21:24.341 "state": "online", 00:21:24.341 "raid_level": "raid5f", 00:21:24.341 "superblock": true, 00:21:24.341 "num_base_bdevs": 4, 00:21:24.341 "num_base_bdevs_discovered": 4, 00:21:24.341 "num_base_bdevs_operational": 4, 00:21:24.341 "base_bdevs_list": [ 00:21:24.341 { 00:21:24.341 "name": "spare", 00:21:24.341 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": "BaseBdev2", 00:21:24.341 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": "BaseBdev3", 00:21:24.341 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 }, 00:21:24.341 { 00:21:24.341 "name": "BaseBdev4", 00:21:24.341 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:24.341 "is_configured": true, 00:21:24.341 "data_offset": 2048, 00:21:24.341 "data_size": 63488 00:21:24.341 } 00:21:24.341 ] 00:21:24.341 }' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:24.341 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.600 "name": "raid_bdev1", 00:21:24.600 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:24.600 "strip_size_kb": 64, 00:21:24.600 "state": "online", 00:21:24.600 "raid_level": "raid5f", 00:21:24.600 "superblock": true, 00:21:24.600 "num_base_bdevs": 4, 00:21:24.600 "num_base_bdevs_discovered": 4, 00:21:24.600 "num_base_bdevs_operational": 4, 00:21:24.600 "base_bdevs_list": [ 00:21:24.600 { 
00:21:24.600 "name": "spare", 00:21:24.600 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:24.600 "is_configured": true, 00:21:24.600 "data_offset": 2048, 00:21:24.600 "data_size": 63488 00:21:24.600 }, 00:21:24.600 { 00:21:24.600 "name": "BaseBdev2", 00:21:24.600 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:24.600 "is_configured": true, 00:21:24.600 "data_offset": 2048, 00:21:24.600 "data_size": 63488 00:21:24.600 }, 00:21:24.600 { 00:21:24.600 "name": "BaseBdev3", 00:21:24.600 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:24.600 "is_configured": true, 00:21:24.600 "data_offset": 2048, 00:21:24.600 "data_size": 63488 00:21:24.600 }, 00:21:24.600 { 00:21:24.600 "name": "BaseBdev4", 00:21:24.600 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:24.600 "is_configured": true, 00:21:24.600 "data_offset": 2048, 00:21:24.600 "data_size": 63488 00:21:24.600 } 00:21:24.600 ] 00:21:24.600 }' 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.600 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.858 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 [2024-11-27 14:20:55.800052] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:24.859 [2024-11-27 14:20:55.800177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:24.859 [2024-11-27 14:20:55.800335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:24.859 [2024-11-27 14:20:55.800488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:24.859 [2024-11-27 
14:20:55.800577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.859 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:25.118 14:20:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.118 14:20:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:25.376 /dev/nbd0 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.376 1+0 records in 00:21:25.376 1+0 records out 00:21:25.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312102 s, 13.1 MB/s 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.376 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:25.636 /dev/nbd1 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:25.636 1+0 records in 00:21:25.636 
1+0 records out 00:21:25.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512039 s, 8.0 MB/s 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:25.636 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.896 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:26.156 
14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:26.156 14:20:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.416 [2024-11-27 14:20:57.149992] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.416 [2024-11-27 14:20:57.150066] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.416 [2024-11-27 14:20:57.150093] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:26.416 [2024-11-27 14:20:57.150105] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.416 [2024-11-27 14:20:57.152937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.416 spare 00:21:26.416 [2024-11-27 14:20:57.153046] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.416 [2024-11-27 14:20:57.153222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:26.416 [2024-11-27 14:20:57.153297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.416 [2024-11-27 14:20:57.153470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.416 [2024-11-27 14:20:57.153582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.416 [2024-11-27 14:20:57.153694] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.416 [2024-11-27 14:20:57.253626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:26.416 [2024-11-27 14:20:57.253795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:26.416 [2024-11-27 14:20:57.254241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:21:26.416 [2024-11-27 14:20:57.263333] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:26.416 [2024-11-27 14:20:57.263408] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:26.416 [2024-11-27 14:20:57.263733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.416 14:20:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.416 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.416 "name": "raid_bdev1", 00:21:26.416 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:26.416 "strip_size_kb": 64, 00:21:26.416 "state": "online", 00:21:26.416 "raid_level": "raid5f", 00:21:26.416 "superblock": true, 00:21:26.416 "num_base_bdevs": 4, 00:21:26.416 "num_base_bdevs_discovered": 4, 00:21:26.416 "num_base_bdevs_operational": 4, 00:21:26.416 "base_bdevs_list": [ 00:21:26.416 { 00:21:26.416 "name": "spare", 00:21:26.416 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:26.416 "is_configured": true, 00:21:26.416 "data_offset": 2048, 00:21:26.416 "data_size": 63488 00:21:26.416 }, 00:21:26.416 { 00:21:26.416 "name": "BaseBdev2", 00:21:26.416 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:26.416 "is_configured": true, 00:21:26.416 "data_offset": 2048, 00:21:26.416 
"data_size": 63488 00:21:26.417 }, 00:21:26.417 { 00:21:26.417 "name": "BaseBdev3", 00:21:26.417 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:26.417 "is_configured": true, 00:21:26.417 "data_offset": 2048, 00:21:26.417 "data_size": 63488 00:21:26.417 }, 00:21:26.417 { 00:21:26.417 "name": "BaseBdev4", 00:21:26.417 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:26.417 "is_configured": true, 00:21:26.417 "data_offset": 2048, 00:21:26.417 "data_size": 63488 00:21:26.417 } 00:21:26.417 ] 00:21:26.417 }' 00:21:26.417 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.417 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.985 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.985 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:26.986 "name": "raid_bdev1", 00:21:26.986 
"uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:26.986 "strip_size_kb": 64, 00:21:26.986 "state": "online", 00:21:26.986 "raid_level": "raid5f", 00:21:26.986 "superblock": true, 00:21:26.986 "num_base_bdevs": 4, 00:21:26.986 "num_base_bdevs_discovered": 4, 00:21:26.986 "num_base_bdevs_operational": 4, 00:21:26.986 "base_bdevs_list": [ 00:21:26.986 { 00:21:26.986 "name": "spare", 00:21:26.986 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev2", 00:21:26.986 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev3", 00:21:26.986 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev4", 00:21:26.986 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 } 00:21:26.986 ] 00:21:26.986 }' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.986 14:20:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.986 [2024-11-27 14:20:57.849245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.986 "name": "raid_bdev1", 00:21:26.986 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:26.986 "strip_size_kb": 64, 00:21:26.986 "state": "online", 00:21:26.986 "raid_level": "raid5f", 00:21:26.986 "superblock": true, 00:21:26.986 "num_base_bdevs": 4, 00:21:26.986 "num_base_bdevs_discovered": 3, 00:21:26.986 "num_base_bdevs_operational": 3, 00:21:26.986 "base_bdevs_list": [ 00:21:26.986 { 00:21:26.986 "name": null, 00:21:26.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.986 "is_configured": false, 00:21:26.986 "data_offset": 0, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev2", 00:21:26.986 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev3", 00:21:26.986 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 2048, 00:21:26.986 "data_size": 63488 00:21:26.986 }, 00:21:26.986 { 00:21:26.986 "name": "BaseBdev4", 00:21:26.986 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:26.986 "is_configured": true, 00:21:26.986 "data_offset": 
2048, 00:21:26.986 "data_size": 63488 00:21:26.986 } 00:21:26.986 ] 00:21:26.986 }' 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.986 14:20:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.555 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:27.555 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.555 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.555 [2024-11-27 14:20:58.272517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.555 [2024-11-27 14:20:58.272742] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:27.555 [2024-11-27 14:20:58.272767] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:27.555 [2024-11-27 14:20:58.272815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.555 [2024-11-27 14:20:58.290632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:21:27.555 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.555 14:20:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:27.555 [2024-11-27 14:20:58.301704] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.492 "name": "raid_bdev1", 00:21:28.492 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:28.492 "strip_size_kb": 64, 00:21:28.492 "state": "online", 00:21:28.492 
"raid_level": "raid5f", 00:21:28.492 "superblock": true, 00:21:28.492 "num_base_bdevs": 4, 00:21:28.492 "num_base_bdevs_discovered": 4, 00:21:28.492 "num_base_bdevs_operational": 4, 00:21:28.492 "process": { 00:21:28.492 "type": "rebuild", 00:21:28.492 "target": "spare", 00:21:28.492 "progress": { 00:21:28.492 "blocks": 17280, 00:21:28.492 "percent": 9 00:21:28.492 } 00:21:28.492 }, 00:21:28.492 "base_bdevs_list": [ 00:21:28.492 { 00:21:28.492 "name": "spare", 00:21:28.492 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:28.492 "is_configured": true, 00:21:28.492 "data_offset": 2048, 00:21:28.492 "data_size": 63488 00:21:28.492 }, 00:21:28.492 { 00:21:28.492 "name": "BaseBdev2", 00:21:28.492 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:28.492 "is_configured": true, 00:21:28.492 "data_offset": 2048, 00:21:28.492 "data_size": 63488 00:21:28.492 }, 00:21:28.492 { 00:21:28.492 "name": "BaseBdev3", 00:21:28.492 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:28.492 "is_configured": true, 00:21:28.492 "data_offset": 2048, 00:21:28.492 "data_size": 63488 00:21:28.492 }, 00:21:28.492 { 00:21:28.492 "name": "BaseBdev4", 00:21:28.492 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:28.492 "is_configured": true, 00:21:28.492 "data_offset": 2048, 00:21:28.492 "data_size": 63488 00:21:28.492 } 00:21:28.492 ] 00:21:28.492 }' 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.492 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.492 [2024-11-27 14:20:59.445197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.750 [2024-11-27 14:20:59.511716] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:28.751 [2024-11-27 14:20:59.511956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.751 [2024-11-27 14:20:59.512010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:28.751 [2024-11-27 14:20:59.512048] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.751 "name": "raid_bdev1", 00:21:28.751 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:28.751 "strip_size_kb": 64, 00:21:28.751 "state": "online", 00:21:28.751 "raid_level": "raid5f", 00:21:28.751 "superblock": true, 00:21:28.751 "num_base_bdevs": 4, 00:21:28.751 "num_base_bdevs_discovered": 3, 00:21:28.751 "num_base_bdevs_operational": 3, 00:21:28.751 "base_bdevs_list": [ 00:21:28.751 { 00:21:28.751 "name": null, 00:21:28.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.751 "is_configured": false, 00:21:28.751 "data_offset": 0, 00:21:28.751 "data_size": 63488 00:21:28.751 }, 00:21:28.751 { 00:21:28.751 "name": "BaseBdev2", 00:21:28.751 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:28.751 "is_configured": true, 00:21:28.751 "data_offset": 2048, 00:21:28.751 "data_size": 63488 00:21:28.751 }, 00:21:28.751 { 00:21:28.751 "name": "BaseBdev3", 00:21:28.751 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:28.751 "is_configured": true, 00:21:28.751 "data_offset": 2048, 00:21:28.751 "data_size": 63488 00:21:28.751 }, 00:21:28.751 { 00:21:28.751 "name": "BaseBdev4", 00:21:28.751 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:28.751 "is_configured": true, 00:21:28.751 "data_offset": 2048, 00:21:28.751 "data_size": 63488 00:21:28.751 } 00:21:28.751 ] 00:21:28.751 }' 
00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.751 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.317 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:29.317 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.317 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.317 [2024-11-27 14:20:59.974276] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:29.317 [2024-11-27 14:20:59.974366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.318 [2024-11-27 14:20:59.974398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:29.318 [2024-11-27 14:20:59.974413] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.318 [2024-11-27 14:20:59.975002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.318 [2024-11-27 14:20:59.975037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:29.318 [2024-11-27 14:20:59.975189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:29.318 [2024-11-27 14:20:59.975208] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.318 [2024-11-27 14:20:59.975220] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:29.318 [2024-11-27 14:20:59.975252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:29.318 [2024-11-27 14:20:59.992633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:21:29.318 spare 00:21:29.318 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.318 14:20:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:29.318 [2024-11-27 14:21:00.003914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.254 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.254 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.254 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:30.254 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:30.254 14:21:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:30.254 "name": "raid_bdev1", 00:21:30.254 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:30.254 "strip_size_kb": 64, 00:21:30.254 "state": 
"online", 00:21:30.254 "raid_level": "raid5f", 00:21:30.254 "superblock": true, 00:21:30.254 "num_base_bdevs": 4, 00:21:30.254 "num_base_bdevs_discovered": 4, 00:21:30.254 "num_base_bdevs_operational": 4, 00:21:30.254 "process": { 00:21:30.254 "type": "rebuild", 00:21:30.254 "target": "spare", 00:21:30.254 "progress": { 00:21:30.254 "blocks": 17280, 00:21:30.254 "percent": 9 00:21:30.254 } 00:21:30.254 }, 00:21:30.254 "base_bdevs_list": [ 00:21:30.254 { 00:21:30.254 "name": "spare", 00:21:30.254 "uuid": "9351eb7b-501e-5970-81c7-d8d41e81adc3", 00:21:30.254 "is_configured": true, 00:21:30.254 "data_offset": 2048, 00:21:30.254 "data_size": 63488 00:21:30.254 }, 00:21:30.254 { 00:21:30.254 "name": "BaseBdev2", 00:21:30.254 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:30.254 "is_configured": true, 00:21:30.254 "data_offset": 2048, 00:21:30.254 "data_size": 63488 00:21:30.254 }, 00:21:30.254 { 00:21:30.254 "name": "BaseBdev3", 00:21:30.254 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:30.254 "is_configured": true, 00:21:30.254 "data_offset": 2048, 00:21:30.254 "data_size": 63488 00:21:30.254 }, 00:21:30.254 { 00:21:30.254 "name": "BaseBdev4", 00:21:30.254 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:30.254 "is_configured": true, 00:21:30.254 "data_offset": 2048, 00:21:30.254 "data_size": 63488 00:21:30.254 } 00:21:30.254 ] 00:21:30.254 }' 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:30.254 14:21:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.254 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.254 [2024-11-27 14:21:01.123826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.515 [2024-11-27 14:21:01.213705] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:30.515 [2024-11-27 14:21:01.213798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.515 [2024-11-27 14:21:01.213821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:30.515 [2024-11-27 14:21:01.213829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.515 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.516 14:21:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.516 "name": "raid_bdev1", 00:21:30.516 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:30.516 "strip_size_kb": 64, 00:21:30.516 "state": "online", 00:21:30.516 "raid_level": "raid5f", 00:21:30.516 "superblock": true, 00:21:30.516 "num_base_bdevs": 4, 00:21:30.516 "num_base_bdevs_discovered": 3, 00:21:30.516 "num_base_bdevs_operational": 3, 00:21:30.516 "base_bdevs_list": [ 00:21:30.516 { 00:21:30.516 "name": null, 00:21:30.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.516 "is_configured": false, 00:21:30.516 "data_offset": 0, 00:21:30.516 "data_size": 63488 00:21:30.516 }, 00:21:30.516 { 00:21:30.516 "name": "BaseBdev2", 00:21:30.516 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:30.516 "is_configured": true, 00:21:30.516 "data_offset": 2048, 00:21:30.516 "data_size": 63488 00:21:30.516 }, 00:21:30.516 { 00:21:30.516 "name": "BaseBdev3", 00:21:30.516 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:30.516 "is_configured": true, 00:21:30.516 "data_offset": 2048, 00:21:30.516 "data_size": 63488 00:21:30.516 }, 00:21:30.516 { 00:21:30.516 "name": "BaseBdev4", 00:21:30.516 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:30.516 "is_configured": true, 00:21:30.516 "data_offset": 2048, 00:21:30.516 
"data_size": 63488 00:21:30.516 } 00:21:30.516 ] 00:21:30.516 }' 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.516 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.774 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.032 "name": "raid_bdev1", 00:21:31.032 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:31.032 "strip_size_kb": 64, 00:21:31.032 "state": "online", 00:21:31.032 "raid_level": "raid5f", 00:21:31.032 "superblock": true, 00:21:31.032 "num_base_bdevs": 4, 00:21:31.032 "num_base_bdevs_discovered": 3, 00:21:31.032 "num_base_bdevs_operational": 3, 00:21:31.032 "base_bdevs_list": [ 00:21:31.032 { 00:21:31.032 "name": null, 00:21:31.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.032 
"is_configured": false, 00:21:31.032 "data_offset": 0, 00:21:31.032 "data_size": 63488 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev2", 00:21:31.032 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 2048, 00:21:31.032 "data_size": 63488 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev3", 00:21:31.032 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 2048, 00:21:31.032 "data_size": 63488 00:21:31.032 }, 00:21:31.032 { 00:21:31.032 "name": "BaseBdev4", 00:21:31.032 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:31.032 "is_configured": true, 00:21:31.032 "data_offset": 2048, 00:21:31.032 "data_size": 63488 00:21:31.032 } 00:21:31.032 ] 00:21:31.032 }' 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.032 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.033 14:21:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.033 [2024-11-27 14:21:01.884343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:31.033 [2024-11-27 14:21:01.884430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.033 [2024-11-27 14:21:01.884460] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:31.033 [2024-11-27 14:21:01.884472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.033 [2024-11-27 14:21:01.885059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.033 [2024-11-27 14:21:01.885096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:31.033 [2024-11-27 14:21:01.885230] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:31.033 [2024-11-27 14:21:01.885250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:31.033 [2024-11-27 14:21:01.885266] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:31.033 [2024-11-27 14:21:01.885279] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:31.033 BaseBdev1 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.033 14:21:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.970 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.230 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.230 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.230 "name": "raid_bdev1", 00:21:32.230 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:32.230 "strip_size_kb": 64, 00:21:32.230 "state": "online", 00:21:32.230 "raid_level": "raid5f", 00:21:32.230 "superblock": true, 00:21:32.230 "num_base_bdevs": 4, 00:21:32.230 "num_base_bdevs_discovered": 3, 00:21:32.230 "num_base_bdevs_operational": 3, 00:21:32.230 "base_bdevs_list": [ 00:21:32.230 { 00:21:32.230 "name": null, 00:21:32.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.230 "is_configured": false, 00:21:32.230 
"data_offset": 0, 00:21:32.230 "data_size": 63488 00:21:32.230 }, 00:21:32.230 { 00:21:32.230 "name": "BaseBdev2", 00:21:32.230 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:32.230 "is_configured": true, 00:21:32.230 "data_offset": 2048, 00:21:32.230 "data_size": 63488 00:21:32.230 }, 00:21:32.230 { 00:21:32.230 "name": "BaseBdev3", 00:21:32.230 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:32.230 "is_configured": true, 00:21:32.230 "data_offset": 2048, 00:21:32.230 "data_size": 63488 00:21:32.230 }, 00:21:32.230 { 00:21:32.230 "name": "BaseBdev4", 00:21:32.230 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:32.230 "is_configured": true, 00:21:32.230 "data_offset": 2048, 00:21:32.230 "data_size": 63488 00:21:32.230 } 00:21:32.230 ] 00:21:32.230 }' 00:21:32.230 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.230 14:21:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.490 "name": "raid_bdev1", 00:21:32.490 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:32.490 "strip_size_kb": 64, 00:21:32.490 "state": "online", 00:21:32.490 "raid_level": "raid5f", 00:21:32.490 "superblock": true, 00:21:32.490 "num_base_bdevs": 4, 00:21:32.490 "num_base_bdevs_discovered": 3, 00:21:32.490 "num_base_bdevs_operational": 3, 00:21:32.490 "base_bdevs_list": [ 00:21:32.490 { 00:21:32.490 "name": null, 00:21:32.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.490 "is_configured": false, 00:21:32.490 "data_offset": 0, 00:21:32.490 "data_size": 63488 00:21:32.490 }, 00:21:32.490 { 00:21:32.490 "name": "BaseBdev2", 00:21:32.490 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 2048, 00:21:32.490 "data_size": 63488 00:21:32.490 }, 00:21:32.490 { 00:21:32.490 "name": "BaseBdev3", 00:21:32.490 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 2048, 00:21:32.490 "data_size": 63488 00:21:32.490 }, 00:21:32.490 { 00:21:32.490 "name": "BaseBdev4", 00:21:32.490 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:32.490 "is_configured": true, 00:21:32.490 "data_offset": 2048, 00:21:32.490 "data_size": 63488 00:21:32.490 } 00:21:32.490 ] 00:21:32.490 }' 00:21:32.490 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.750 
14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.750 [2024-11-27 14:21:03.525709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:32.750 [2024-11-27 14:21:03.525970] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:32.750 [2024-11-27 14:21:03.525994] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:32.750 request: 00:21:32.750 { 00:21:32.750 "base_bdev": "BaseBdev1", 00:21:32.750 "raid_bdev": "raid_bdev1", 00:21:32.750 "method": "bdev_raid_add_base_bdev", 00:21:32.750 "req_id": 1 00:21:32.750 } 00:21:32.750 Got JSON-RPC error response 00:21:32.750 response: 00:21:32.750 { 00:21:32.750 "code": -22, 00:21:32.750 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:21:32.750 } 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.750 14:21:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.690 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:33.690 "name": "raid_bdev1", 00:21:33.690 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:33.690 "strip_size_kb": 64, 00:21:33.690 "state": "online", 00:21:33.690 "raid_level": "raid5f", 00:21:33.690 "superblock": true, 00:21:33.690 "num_base_bdevs": 4, 00:21:33.690 "num_base_bdevs_discovered": 3, 00:21:33.690 "num_base_bdevs_operational": 3, 00:21:33.690 "base_bdevs_list": [ 00:21:33.690 { 00:21:33.690 "name": null, 00:21:33.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.690 "is_configured": false, 00:21:33.690 "data_offset": 0, 00:21:33.690 "data_size": 63488 00:21:33.690 }, 00:21:33.690 { 00:21:33.690 "name": "BaseBdev2", 00:21:33.690 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:33.690 "is_configured": true, 00:21:33.690 "data_offset": 2048, 00:21:33.690 "data_size": 63488 00:21:33.690 }, 00:21:33.690 { 00:21:33.690 "name": "BaseBdev3", 00:21:33.690 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:33.690 "is_configured": true, 00:21:33.690 "data_offset": 2048, 00:21:33.690 "data_size": 63488 00:21:33.690 }, 00:21:33.690 { 00:21:33.690 "name": "BaseBdev4", 00:21:33.690 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:33.690 "is_configured": true, 00:21:33.690 "data_offset": 2048, 00:21:33.691 "data_size": 63488 00:21:33.691 } 00:21:33.691 ] 00:21:33.691 }' 00:21:33.691 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:33.691 14:21:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:34.261 "name": "raid_bdev1", 00:21:34.261 "uuid": "f996c555-5a4c-4127-8a33-6b63fbe4c6c6", 00:21:34.261 "strip_size_kb": 64, 00:21:34.261 "state": "online", 00:21:34.261 "raid_level": "raid5f", 00:21:34.261 "superblock": true, 00:21:34.261 "num_base_bdevs": 4, 00:21:34.261 "num_base_bdevs_discovered": 3, 00:21:34.261 "num_base_bdevs_operational": 3, 00:21:34.261 "base_bdevs_list": [ 00:21:34.261 { 00:21:34.261 "name": null, 00:21:34.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.261 "is_configured": false, 00:21:34.261 "data_offset": 0, 00:21:34.261 "data_size": 63488 00:21:34.261 }, 00:21:34.261 { 00:21:34.261 "name": "BaseBdev2", 00:21:34.261 "uuid": "379b4cf5-752a-5c2c-a4a6-54f64cd9f9c8", 00:21:34.261 "is_configured": true, 00:21:34.261 
"data_offset": 2048, 00:21:34.261 "data_size": 63488 00:21:34.261 }, 00:21:34.261 { 00:21:34.261 "name": "BaseBdev3", 00:21:34.261 "uuid": "4da7087c-9283-5c46-8e57-0d315ca75f5c", 00:21:34.261 "is_configured": true, 00:21:34.261 "data_offset": 2048, 00:21:34.261 "data_size": 63488 00:21:34.261 }, 00:21:34.261 { 00:21:34.261 "name": "BaseBdev4", 00:21:34.261 "uuid": "bb63389d-4fcd-593c-8f67-bb6b783fd5f0", 00:21:34.261 "is_configured": true, 00:21:34.261 "data_offset": 2048, 00:21:34.261 "data_size": 63488 00:21:34.261 } 00:21:34.261 ] 00:21:34.261 }' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85369 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85369 ']' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85369 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.261 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85369 00:21:34.522 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.522 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.522 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 85369' 00:21:34.522 killing process with pid 85369 00:21:34.522 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85369 00:21:34.522 Received shutdown signal, test time was about 60.000000 seconds 00:21:34.522 00:21:34.522 Latency(us) 00:21:34.522 [2024-11-27T14:21:05.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.522 [2024-11-27T14:21:05.478Z] =================================================================================================================== 00:21:34.522 [2024-11-27T14:21:05.478Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.522 [2024-11-27 14:21:05.222260] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:34.522 14:21:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85369 00:21:34.522 [2024-11-27 14:21:05.222443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.522 [2024-11-27 14:21:05.222545] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.522 [2024-11-27 14:21:05.222568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:35.094 [2024-11-27 14:21:05.761682] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.475 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:36.475 00:21:36.475 real 0m27.597s 00:21:36.475 user 0m34.803s 00:21:36.475 sys 0m2.983s 00:21:36.475 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:36.475 ************************************ 00:21:36.475 END TEST raid5f_rebuild_test_sb 00:21:36.475 ************************************ 00:21:36.475 14:21:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.475 14:21:07 bdev_raid -- bdev/bdev_raid.sh@995 
-- # base_blocklen=4096 00:21:36.475 14:21:07 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:36.475 14:21:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:36.475 14:21:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:36.475 14:21:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.475 ************************************ 00:21:36.475 START TEST raid_state_function_test_sb_4k 00:21:36.475 ************************************ 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:36.475 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86180 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86180' 00:21:36.476 Process raid pid: 86180 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86180 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86180 ']' 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.476 14:21:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.476 [2024-11-27 14:21:07.183896] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:21:36.476 [2024-11-27 14:21:07.184161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.476 [2024-11-27 14:21:07.364594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.736 [2024-11-27 14:21:07.508829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.996 [2024-11-27 14:21:07.751801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:36.996 [2024-11-27 14:21:07.751985] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.256 [2024-11-27 14:21:08.070515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.256 [2024-11-27 14:21:08.070673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.256 [2024-11-27 14:21:08.070718] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.256 [2024-11-27 14:21:08.070744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.256 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.257 "name": "Existed_Raid", 00:21:37.257 "uuid": "48b0270c-147a-4daa-807b-8a0c48fc011b", 00:21:37.257 "strip_size_kb": 0, 00:21:37.257 "state": "configuring", 00:21:37.257 "raid_level": "raid1", 00:21:37.257 "superblock": true, 00:21:37.257 "num_base_bdevs": 2, 00:21:37.257 "num_base_bdevs_discovered": 0, 00:21:37.257 "num_base_bdevs_operational": 2, 00:21:37.257 "base_bdevs_list": [ 00:21:37.257 { 00:21:37.257 "name": "BaseBdev1", 00:21:37.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.257 "is_configured": false, 00:21:37.257 "data_offset": 0, 00:21:37.257 "data_size": 0 00:21:37.257 }, 00:21:37.257 { 00:21:37.257 "name": "BaseBdev2", 00:21:37.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.257 "is_configured": false, 00:21:37.257 "data_offset": 0, 00:21:37.257 "data_size": 0 00:21:37.257 } 00:21:37.257 ] 00:21:37.257 }' 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.257 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:37.831 14:21:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.831 [2024-11-27 14:21:08.505755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:37.831 [2024-11-27 14:21:08.505913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.831 [2024-11-27 14:21:08.517741] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.831 [2024-11-27 14:21:08.517807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.831 [2024-11-27 14:21:08.517820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.831 [2024-11-27 14:21:08.517835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:21:37.831 [2024-11-27 14:21:08.573644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.831 BaseBdev1 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.831 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.831 [ 00:21:37.831 { 00:21:37.831 "name": "BaseBdev1", 00:21:37.831 "aliases": [ 00:21:37.832 "76b1d36d-7ee8-4263-8aad-2c1a93cee039" 00:21:37.832 ], 00:21:37.832 "product_name": "Malloc 
disk", 00:21:37.832 "block_size": 4096, 00:21:37.832 "num_blocks": 8192, 00:21:37.832 "uuid": "76b1d36d-7ee8-4263-8aad-2c1a93cee039", 00:21:37.832 "assigned_rate_limits": { 00:21:37.832 "rw_ios_per_sec": 0, 00:21:37.832 "rw_mbytes_per_sec": 0, 00:21:37.832 "r_mbytes_per_sec": 0, 00:21:37.832 "w_mbytes_per_sec": 0 00:21:37.832 }, 00:21:37.832 "claimed": true, 00:21:37.832 "claim_type": "exclusive_write", 00:21:37.832 "zoned": false, 00:21:37.832 "supported_io_types": { 00:21:37.832 "read": true, 00:21:37.832 "write": true, 00:21:37.832 "unmap": true, 00:21:37.832 "flush": true, 00:21:37.832 "reset": true, 00:21:37.832 "nvme_admin": false, 00:21:37.832 "nvme_io": false, 00:21:37.832 "nvme_io_md": false, 00:21:37.832 "write_zeroes": true, 00:21:37.832 "zcopy": true, 00:21:37.832 "get_zone_info": false, 00:21:37.832 "zone_management": false, 00:21:37.832 "zone_append": false, 00:21:37.832 "compare": false, 00:21:37.832 "compare_and_write": false, 00:21:37.832 "abort": true, 00:21:37.832 "seek_hole": false, 00:21:37.832 "seek_data": false, 00:21:37.832 "copy": true, 00:21:37.832 "nvme_iov_md": false 00:21:37.832 }, 00:21:37.832 "memory_domains": [ 00:21:37.832 { 00:21:37.832 "dma_device_id": "system", 00:21:37.832 "dma_device_type": 1 00:21:37.832 }, 00:21:37.832 { 00:21:37.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.832 "dma_device_type": 2 00:21:37.832 } 00:21:37.832 ], 00:21:37.832 "driver_specific": {} 00:21:37.832 } 00:21:37.832 ] 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:37.832 14:21:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.832 "name": "Existed_Raid", 00:21:37.832 "uuid": "4fb7c454-1d24-47d5-8bdb-311413fa3cf5", 00:21:37.832 "strip_size_kb": 0, 00:21:37.832 "state": "configuring", 00:21:37.832 "raid_level": "raid1", 00:21:37.832 "superblock": true, 00:21:37.832 "num_base_bdevs": 2, 00:21:37.832 "num_base_bdevs_discovered": 1, 00:21:37.832 "num_base_bdevs_operational": 2, 
00:21:37.832 "base_bdevs_list": [ 00:21:37.832 { 00:21:37.832 "name": "BaseBdev1", 00:21:37.832 "uuid": "76b1d36d-7ee8-4263-8aad-2c1a93cee039", 00:21:37.832 "is_configured": true, 00:21:37.832 "data_offset": 256, 00:21:37.832 "data_size": 7936 00:21:37.832 }, 00:21:37.832 { 00:21:37.832 "name": "BaseBdev2", 00:21:37.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.832 "is_configured": false, 00:21:37.832 "data_offset": 0, 00:21:37.832 "data_size": 0 00:21:37.832 } 00:21:37.832 ] 00:21:37.832 }' 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.832 14:21:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.399 [2024-11-27 14:21:09.076847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.399 [2024-11-27 14:21:09.077029] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.399 [2024-11-27 14:21:09.084870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:21:38.399 [2024-11-27 14:21:09.087097] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.399 [2024-11-27 14:21:09.087189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.399 14:21:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.399 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.399 "name": "Existed_Raid", 00:21:38.399 "uuid": "fc7a887a-be4b-4235-87d9-f64f1b757a83", 00:21:38.399 "strip_size_kb": 0, 00:21:38.399 "state": "configuring", 00:21:38.399 "raid_level": "raid1", 00:21:38.399 "superblock": true, 00:21:38.399 "num_base_bdevs": 2, 00:21:38.400 "num_base_bdevs_discovered": 1, 00:21:38.400 "num_base_bdevs_operational": 2, 00:21:38.400 "base_bdevs_list": [ 00:21:38.400 { 00:21:38.400 "name": "BaseBdev1", 00:21:38.400 "uuid": "76b1d36d-7ee8-4263-8aad-2c1a93cee039", 00:21:38.400 "is_configured": true, 00:21:38.400 "data_offset": 256, 00:21:38.400 "data_size": 7936 00:21:38.400 }, 00:21:38.400 { 00:21:38.400 "name": "BaseBdev2", 00:21:38.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.400 "is_configured": false, 00:21:38.400 "data_offset": 0, 00:21:38.400 "data_size": 0 00:21:38.400 } 00:21:38.400 ] 00:21:38.400 }' 00:21:38.400 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.400 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # 
set +x 00:21:38.659 [2024-11-27 14:21:09.565356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:38.659 [2024-11-27 14:21:09.565692] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:38.659 [2024-11-27 14:21:09.565711] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:38.659 [2024-11-27 14:21:09.566051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:38.659 BaseBdev2 00:21:38.659 [2024-11-27 14:21:09.566294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:38.659 [2024-11-27 14:21:09.566312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:38.659 [2024-11-27 14:21:09.566491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.659 [ 00:21:38.659 { 00:21:38.659 "name": "BaseBdev2", 00:21:38.659 "aliases": [ 00:21:38.659 "829c7aa6-36aa-4a0c-b535-7ff22c767258" 00:21:38.659 ], 00:21:38.659 "product_name": "Malloc disk", 00:21:38.659 "block_size": 4096, 00:21:38.659 "num_blocks": 8192, 00:21:38.659 "uuid": "829c7aa6-36aa-4a0c-b535-7ff22c767258", 00:21:38.659 "assigned_rate_limits": { 00:21:38.659 "rw_ios_per_sec": 0, 00:21:38.659 "rw_mbytes_per_sec": 0, 00:21:38.659 "r_mbytes_per_sec": 0, 00:21:38.659 "w_mbytes_per_sec": 0 00:21:38.659 }, 00:21:38.659 "claimed": true, 00:21:38.659 "claim_type": "exclusive_write", 00:21:38.659 "zoned": false, 00:21:38.659 "supported_io_types": { 00:21:38.659 "read": true, 00:21:38.659 "write": true, 00:21:38.659 "unmap": true, 00:21:38.659 "flush": true, 00:21:38.659 "reset": true, 00:21:38.659 "nvme_admin": false, 00:21:38.659 "nvme_io": false, 00:21:38.659 "nvme_io_md": false, 00:21:38.659 "write_zeroes": true, 00:21:38.659 "zcopy": true, 00:21:38.659 "get_zone_info": false, 00:21:38.659 "zone_management": false, 00:21:38.659 "zone_append": false, 00:21:38.659 "compare": false, 00:21:38.659 "compare_and_write": false, 00:21:38.659 "abort": true, 00:21:38.659 "seek_hole": false, 00:21:38.659 "seek_data": false, 00:21:38.659 "copy": true, 00:21:38.659 "nvme_iov_md": false 00:21:38.659 }, 00:21:38.659 "memory_domains": [ 00:21:38.659 { 00:21:38.659 "dma_device_id": "system", 
00:21:38.659 "dma_device_type": 1 00:21:38.659 }, 00:21:38.659 { 00:21:38.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.659 "dma_device_type": 2 00:21:38.659 } 00:21:38.659 ], 00:21:38.659 "driver_specific": {} 00:21:38.659 } 00:21:38.659 ] 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.659 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.918 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.918 "name": "Existed_Raid", 00:21:38.919 "uuid": "fc7a887a-be4b-4235-87d9-f64f1b757a83", 00:21:38.919 "strip_size_kb": 0, 00:21:38.919 "state": "online", 00:21:38.919 "raid_level": "raid1", 00:21:38.919 "superblock": true, 00:21:38.919 "num_base_bdevs": 2, 00:21:38.919 "num_base_bdevs_discovered": 2, 00:21:38.919 "num_base_bdevs_operational": 2, 00:21:38.919 "base_bdevs_list": [ 00:21:38.919 { 00:21:38.919 "name": "BaseBdev1", 00:21:38.919 "uuid": "76b1d36d-7ee8-4263-8aad-2c1a93cee039", 00:21:38.919 "is_configured": true, 00:21:38.919 "data_offset": 256, 00:21:38.919 "data_size": 7936 00:21:38.919 }, 00:21:38.919 { 00:21:38.919 "name": "BaseBdev2", 00:21:38.919 "uuid": "829c7aa6-36aa-4a0c-b535-7ff22c767258", 00:21:38.919 "is_configured": true, 00:21:38.919 "data_offset": 256, 00:21:38.919 "data_size": 7936 00:21:38.919 } 00:21:38.919 ] 00:21:38.919 }' 00:21:38.919 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.919 14:21:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:39.178 14:21:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:39.178 [2024-11-27 14:21:10.028922] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:39.178 "name": "Existed_Raid", 00:21:39.178 "aliases": [ 00:21:39.178 "fc7a887a-be4b-4235-87d9-f64f1b757a83" 00:21:39.178 ], 00:21:39.178 "product_name": "Raid Volume", 00:21:39.178 "block_size": 4096, 00:21:39.178 "num_blocks": 7936, 00:21:39.178 "uuid": "fc7a887a-be4b-4235-87d9-f64f1b757a83", 00:21:39.178 "assigned_rate_limits": { 00:21:39.178 "rw_ios_per_sec": 0, 00:21:39.178 "rw_mbytes_per_sec": 0, 00:21:39.178 "r_mbytes_per_sec": 0, 00:21:39.178 "w_mbytes_per_sec": 0 00:21:39.178 }, 00:21:39.178 "claimed": false, 00:21:39.178 "zoned": false, 00:21:39.178 "supported_io_types": { 00:21:39.178 "read": true, 00:21:39.178 "write": true, 00:21:39.178 "unmap": false, 00:21:39.178 "flush": false, 00:21:39.178 "reset": true, 00:21:39.178 
"nvme_admin": false, 00:21:39.178 "nvme_io": false, 00:21:39.178 "nvme_io_md": false, 00:21:39.178 "write_zeroes": true, 00:21:39.178 "zcopy": false, 00:21:39.178 "get_zone_info": false, 00:21:39.178 "zone_management": false, 00:21:39.178 "zone_append": false, 00:21:39.178 "compare": false, 00:21:39.178 "compare_and_write": false, 00:21:39.178 "abort": false, 00:21:39.178 "seek_hole": false, 00:21:39.178 "seek_data": false, 00:21:39.178 "copy": false, 00:21:39.178 "nvme_iov_md": false 00:21:39.178 }, 00:21:39.178 "memory_domains": [ 00:21:39.178 { 00:21:39.178 "dma_device_id": "system", 00:21:39.178 "dma_device_type": 1 00:21:39.178 }, 00:21:39.178 { 00:21:39.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.178 "dma_device_type": 2 00:21:39.178 }, 00:21:39.178 { 00:21:39.178 "dma_device_id": "system", 00:21:39.178 "dma_device_type": 1 00:21:39.178 }, 00:21:39.178 { 00:21:39.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.178 "dma_device_type": 2 00:21:39.178 } 00:21:39.178 ], 00:21:39.178 "driver_specific": { 00:21:39.178 "raid": { 00:21:39.178 "uuid": "fc7a887a-be4b-4235-87d9-f64f1b757a83", 00:21:39.178 "strip_size_kb": 0, 00:21:39.178 "state": "online", 00:21:39.178 "raid_level": "raid1", 00:21:39.178 "superblock": true, 00:21:39.178 "num_base_bdevs": 2, 00:21:39.178 "num_base_bdevs_discovered": 2, 00:21:39.178 "num_base_bdevs_operational": 2, 00:21:39.178 "base_bdevs_list": [ 00:21:39.178 { 00:21:39.178 "name": "BaseBdev1", 00:21:39.178 "uuid": "76b1d36d-7ee8-4263-8aad-2c1a93cee039", 00:21:39.178 "is_configured": true, 00:21:39.178 "data_offset": 256, 00:21:39.178 "data_size": 7936 00:21:39.178 }, 00:21:39.178 { 00:21:39.178 "name": "BaseBdev2", 00:21:39.178 "uuid": "829c7aa6-36aa-4a0c-b535-7ff22c767258", 00:21:39.178 "is_configured": true, 00:21:39.178 "data_offset": 256, 00:21:39.178 "data_size": 7936 00:21:39.178 } 00:21:39.178 ] 00:21:39.178 } 00:21:39.178 } 00:21:39.178 }' 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:39.178 BaseBdev2' 00:21:39.178 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:39.437 14:21:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.437 [2024-11-27 14:21:10.272332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:39.437 14:21:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.437 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.698 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.698 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.698 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.698 "name": "Existed_Raid", 00:21:39.698 "uuid": "fc7a887a-be4b-4235-87d9-f64f1b757a83", 00:21:39.698 "strip_size_kb": 0, 00:21:39.698 "state": "online", 00:21:39.698 "raid_level": "raid1", 00:21:39.698 "superblock": true, 00:21:39.698 "num_base_bdevs": 2, 00:21:39.698 "num_base_bdevs_discovered": 1, 00:21:39.698 "num_base_bdevs_operational": 1, 00:21:39.698 
"base_bdevs_list": [ 00:21:39.698 { 00:21:39.698 "name": null, 00:21:39.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.698 "is_configured": false, 00:21:39.698 "data_offset": 0, 00:21:39.698 "data_size": 7936 00:21:39.698 }, 00:21:39.698 { 00:21:39.698 "name": "BaseBdev2", 00:21:39.698 "uuid": "829c7aa6-36aa-4a0c-b535-7ff22c767258", 00:21:39.698 "is_configured": true, 00:21:39.698 "data_offset": 256, 00:21:39.698 "data_size": 7936 00:21:39.698 } 00:21:39.698 ] 00:21:39.698 }' 00:21:39.698 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.698 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:39.958 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.958 [2024-11-27 14:21:10.840348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:39.958 [2024-11-27 14:21:10.840491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.217 [2024-11-27 14:21:10.945971] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.217 [2024-11-27 14:21:10.946054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.217 [2024-11-27 14:21:10.946067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:40.217 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.217 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:40.217 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:40.217 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86180 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86180 ']' 00:21:40.218 14:21:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86180 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86180 00:21:40.218 killing process with pid 86180 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86180' 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86180 00:21:40.218 [2024-11-27 14:21:11.044248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.218 14:21:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86180 00:21:40.218 [2024-11-27 14:21:11.062445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.598 14:21:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:21:41.598 00:21:41.598 real 0m5.150s 00:21:41.598 user 0m7.293s 00:21:41.598 sys 0m0.947s 00:21:41.598 14:21:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.598 14:21:12 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:41.598 ************************************ 00:21:41.598 END TEST raid_state_function_test_sb_4k 00:21:41.598 ************************************ 00:21:41.598 14:21:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:41.598 14:21:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:41.598 14:21:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.598 14:21:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.598 ************************************ 00:21:41.598 START TEST raid_superblock_test_4k 00:21:41.598 ************************************ 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86427 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86427 00:21:41.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86427 ']' 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.598 14:21:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.598 [2024-11-27 14:21:12.423489] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:21:41.599 [2024-11-27 14:21:12.423672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86427 ] 00:21:41.858 [2024-11-27 14:21:12.602056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.858 [2024-11-27 14:21:12.718153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.117 [2024-11-27 14:21:12.917339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.117 [2024-11-27 14:21:12.917409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.377 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.636 malloc1 00:21:42.636 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.636 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:42.636 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.636 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.636 [2024-11-27 14:21:13.335396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:42.636 [2024-11-27 14:21:13.335481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.637 [2024-11-27 14:21:13.335510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:42.637 [2024-11-27 14:21:13.335526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.637 [2024-11-27 14:21:13.338081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.637 [2024-11-27 14:21:13.338218] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:42.637 pt1 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.637 malloc2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.637 [2024-11-27 14:21:13.406742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.637 [2024-11-27 14:21:13.406829] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.637 [2024-11-27 14:21:13.406858] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:42.637 [2024-11-27 14:21:13.406867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.637 [2024-11-27 14:21:13.409209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.637 [2024-11-27 
14:21:13.409246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.637 pt2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.637 [2024-11-27 14:21:13.418770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:42.637 [2024-11-27 14:21:13.420772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.637 [2024-11-27 14:21:13.420989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:42.637 [2024-11-27 14:21:13.421009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:42.637 [2024-11-27 14:21:13.421322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:42.637 [2024-11-27 14:21:13.421521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:42.637 [2024-11-27 14:21:13.421546] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:42.637 [2024-11-27 14:21:13.421738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.637 "name": "raid_bdev1", 00:21:42.637 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:42.637 "strip_size_kb": 0, 00:21:42.637 "state": "online", 00:21:42.637 "raid_level": "raid1", 00:21:42.637 "superblock": true, 00:21:42.637 "num_base_bdevs": 2, 00:21:42.637 
"num_base_bdevs_discovered": 2, 00:21:42.637 "num_base_bdevs_operational": 2, 00:21:42.637 "base_bdevs_list": [ 00:21:42.637 { 00:21:42.637 "name": "pt1", 00:21:42.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:42.637 "is_configured": true, 00:21:42.637 "data_offset": 256, 00:21:42.637 "data_size": 7936 00:21:42.637 }, 00:21:42.637 { 00:21:42.637 "name": "pt2", 00:21:42.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.637 "is_configured": true, 00:21:42.637 "data_offset": 256, 00:21:42.637 "data_size": 7936 00:21:42.637 } 00:21:42.637 ] 00:21:42.637 }' 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.637 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.206 [2024-11-27 14:21:13.906235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:43.206 "name": "raid_bdev1", 00:21:43.206 "aliases": [ 00:21:43.206 "576b6db7-d266-4306-b1dc-5478480dac28" 00:21:43.206 ], 00:21:43.206 "product_name": "Raid Volume", 00:21:43.206 "block_size": 4096, 00:21:43.206 "num_blocks": 7936, 00:21:43.206 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:43.206 "assigned_rate_limits": { 00:21:43.206 "rw_ios_per_sec": 0, 00:21:43.206 "rw_mbytes_per_sec": 0, 00:21:43.206 "r_mbytes_per_sec": 0, 00:21:43.206 "w_mbytes_per_sec": 0 00:21:43.206 }, 00:21:43.206 "claimed": false, 00:21:43.206 "zoned": false, 00:21:43.206 "supported_io_types": { 00:21:43.206 "read": true, 00:21:43.206 "write": true, 00:21:43.206 "unmap": false, 00:21:43.206 "flush": false, 00:21:43.206 "reset": true, 00:21:43.206 "nvme_admin": false, 00:21:43.206 "nvme_io": false, 00:21:43.206 "nvme_io_md": false, 00:21:43.206 "write_zeroes": true, 00:21:43.206 "zcopy": false, 00:21:43.206 "get_zone_info": false, 00:21:43.206 "zone_management": false, 00:21:43.206 "zone_append": false, 00:21:43.206 "compare": false, 00:21:43.206 "compare_and_write": false, 00:21:43.206 "abort": false, 00:21:43.206 "seek_hole": false, 00:21:43.206 "seek_data": false, 00:21:43.206 "copy": false, 00:21:43.206 "nvme_iov_md": false 00:21:43.206 }, 00:21:43.206 "memory_domains": [ 00:21:43.206 { 00:21:43.206 "dma_device_id": "system", 00:21:43.206 "dma_device_type": 1 00:21:43.206 }, 00:21:43.206 { 00:21:43.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.206 "dma_device_type": 2 00:21:43.206 }, 00:21:43.206 { 00:21:43.206 "dma_device_id": "system", 00:21:43.206 "dma_device_type": 1 00:21:43.206 }, 00:21:43.206 { 00:21:43.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.206 "dma_device_type": 2 00:21:43.206 } 00:21:43.206 ], 
00:21:43.206 "driver_specific": { 00:21:43.206 "raid": { 00:21:43.206 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:43.206 "strip_size_kb": 0, 00:21:43.206 "state": "online", 00:21:43.206 "raid_level": "raid1", 00:21:43.206 "superblock": true, 00:21:43.206 "num_base_bdevs": 2, 00:21:43.206 "num_base_bdevs_discovered": 2, 00:21:43.206 "num_base_bdevs_operational": 2, 00:21:43.206 "base_bdevs_list": [ 00:21:43.206 { 00:21:43.206 "name": "pt1", 00:21:43.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.206 "is_configured": true, 00:21:43.206 "data_offset": 256, 00:21:43.206 "data_size": 7936 00:21:43.206 }, 00:21:43.206 { 00:21:43.206 "name": "pt2", 00:21:43.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.206 "is_configured": true, 00:21:43.206 "data_offset": 256, 00:21:43.206 "data_size": 7936 00:21:43.206 } 00:21:43.206 ] 00:21:43.206 } 00:21:43.206 } 00:21:43.206 }' 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:43.206 pt2' 00:21:43.206 14:21:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.206 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.207 [2024-11-27 14:21:14.125836] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.207 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=576b6db7-d266-4306-b1dc-5478480dac28 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 576b6db7-d266-4306-b1dc-5478480dac28 ']' 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.467 [2024-11-27 14:21:14.173461] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.467 [2024-11-27 14:21:14.173499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.467 [2024-11-27 14:21:14.173593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.467 [2024-11-27 14:21:14.173656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.467 [2024-11-27 14:21:14.173677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:43.467 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.468 [2024-11-27 14:21:14.301295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:43.468 [2024-11-27 14:21:14.303263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:43.468 [2024-11-27 14:21:14.303338] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:43.468 [2024-11-27 14:21:14.303408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:43.468 [2024-11-27 14:21:14.303448] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.468 [2024-11-27 14:21:14.303460] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:43.468 request: 00:21:43.468 { 00:21:43.468 "name": "raid_bdev1", 00:21:43.468 "raid_level": "raid1", 00:21:43.468 "base_bdevs": [ 00:21:43.468 "malloc1", 00:21:43.468 "malloc2" 00:21:43.468 ], 00:21:43.468 "superblock": false, 00:21:43.468 "method": "bdev_raid_create", 00:21:43.468 "req_id": 1 00:21:43.468 } 00:21:43.468 Got JSON-RPC error response 00:21:43.468 response: 00:21:43.468 { 00:21:43.468 "code": -17, 00:21:43.468 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:43.468 } 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.468 [2024-11-27 14:21:14.373192] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.468 [2024-11-27 14:21:14.373271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.468 [2024-11-27 14:21:14.373292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:43.468 [2024-11-27 14:21:14.373303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.468 [2024-11-27 14:21:14.375701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.468 [2024-11-27 14:21:14.375747] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.468 [2024-11-27 14:21:14.375841] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.468 [2024-11-27 14:21:14.375932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.468 pt1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.468 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.728 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.728 "name": "raid_bdev1", 00:21:43.728 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:43.728 "strip_size_kb": 0, 00:21:43.728 "state": "configuring", 00:21:43.728 "raid_level": "raid1", 00:21:43.728 "superblock": true, 00:21:43.728 "num_base_bdevs": 2, 00:21:43.728 "num_base_bdevs_discovered": 1, 00:21:43.728 "num_base_bdevs_operational": 2, 00:21:43.728 "base_bdevs_list": [ 00:21:43.728 { 00:21:43.728 "name": "pt1", 00:21:43.728 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.728 "is_configured": true, 00:21:43.728 "data_offset": 256, 00:21:43.728 "data_size": 7936 00:21:43.728 }, 00:21:43.728 { 00:21:43.728 "name": null, 00:21:43.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.728 "is_configured": false, 00:21:43.728 "data_offset": 256, 00:21:43.728 "data_size": 7936 00:21:43.728 } 
00:21:43.728 ] 00:21:43.728 }' 00:21:43.728 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.728 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.988 [2024-11-27 14:21:14.820438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:43.988 [2024-11-27 14:21:14.820532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.988 [2024-11-27 14:21:14.820556] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:43.988 [2024-11-27 14:21:14.820569] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.988 [2024-11-27 14:21:14.821113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.988 [2024-11-27 14:21:14.821158] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:43.988 [2024-11-27 14:21:14.821249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:43.988 [2024-11-27 14:21:14.821287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.988 [2024-11-27 14:21:14.821430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:43.988 [2024-11-27 14:21:14.821451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:43.988 [2024-11-27 14:21:14.821723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:43.988 [2024-11-27 14:21:14.821916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:43.988 [2024-11-27 14:21:14.821932] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:43.988 [2024-11-27 14:21:14.822107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.988 pt2 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.988 "name": "raid_bdev1", 00:21:43.988 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:43.988 "strip_size_kb": 0, 00:21:43.988 "state": "online", 00:21:43.988 "raid_level": "raid1", 00:21:43.988 "superblock": true, 00:21:43.988 "num_base_bdevs": 2, 00:21:43.988 "num_base_bdevs_discovered": 2, 00:21:43.988 "num_base_bdevs_operational": 2, 00:21:43.988 "base_bdevs_list": [ 00:21:43.988 { 00:21:43.988 "name": "pt1", 00:21:43.988 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:43.988 "is_configured": true, 00:21:43.988 "data_offset": 256, 00:21:43.988 "data_size": 7936 00:21:43.988 }, 00:21:43.988 { 00:21:43.988 "name": "pt2", 00:21:43.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.988 "is_configured": true, 00:21:43.988 "data_offset": 256, 00:21:43.988 "data_size": 7936 00:21:43.988 } 00:21:43.988 ] 00:21:43.988 }' 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.988 14:21:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.559 [2024-11-27 14:21:15.295927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:44.559 "name": "raid_bdev1", 00:21:44.559 "aliases": [ 00:21:44.559 "576b6db7-d266-4306-b1dc-5478480dac28" 00:21:44.559 ], 00:21:44.559 "product_name": "Raid Volume", 00:21:44.559 "block_size": 4096, 00:21:44.559 "num_blocks": 7936, 00:21:44.559 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:44.559 "assigned_rate_limits": { 00:21:44.559 "rw_ios_per_sec": 0, 00:21:44.559 "rw_mbytes_per_sec": 0, 00:21:44.559 "r_mbytes_per_sec": 0, 00:21:44.559 "w_mbytes_per_sec": 0 00:21:44.559 }, 00:21:44.559 "claimed": false, 00:21:44.559 "zoned": false, 00:21:44.559 "supported_io_types": { 00:21:44.559 "read": true, 00:21:44.559 "write": true, 00:21:44.559 "unmap": false, 
00:21:44.559 "flush": false, 00:21:44.559 "reset": true, 00:21:44.559 "nvme_admin": false, 00:21:44.559 "nvme_io": false, 00:21:44.559 "nvme_io_md": false, 00:21:44.559 "write_zeroes": true, 00:21:44.559 "zcopy": false, 00:21:44.559 "get_zone_info": false, 00:21:44.559 "zone_management": false, 00:21:44.559 "zone_append": false, 00:21:44.559 "compare": false, 00:21:44.559 "compare_and_write": false, 00:21:44.559 "abort": false, 00:21:44.559 "seek_hole": false, 00:21:44.559 "seek_data": false, 00:21:44.559 "copy": false, 00:21:44.559 "nvme_iov_md": false 00:21:44.559 }, 00:21:44.559 "memory_domains": [ 00:21:44.559 { 00:21:44.559 "dma_device_id": "system", 00:21:44.559 "dma_device_type": 1 00:21:44.559 }, 00:21:44.559 { 00:21:44.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.559 "dma_device_type": 2 00:21:44.559 }, 00:21:44.559 { 00:21:44.559 "dma_device_id": "system", 00:21:44.559 "dma_device_type": 1 00:21:44.559 }, 00:21:44.559 { 00:21:44.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.559 "dma_device_type": 2 00:21:44.559 } 00:21:44.559 ], 00:21:44.559 "driver_specific": { 00:21:44.559 "raid": { 00:21:44.559 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:44.559 "strip_size_kb": 0, 00:21:44.559 "state": "online", 00:21:44.559 "raid_level": "raid1", 00:21:44.559 "superblock": true, 00:21:44.559 "num_base_bdevs": 2, 00:21:44.559 "num_base_bdevs_discovered": 2, 00:21:44.559 "num_base_bdevs_operational": 2, 00:21:44.559 "base_bdevs_list": [ 00:21:44.559 { 00:21:44.559 "name": "pt1", 00:21:44.559 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:44.559 "is_configured": true, 00:21:44.559 "data_offset": 256, 00:21:44.559 "data_size": 7936 00:21:44.559 }, 00:21:44.559 { 00:21:44.559 "name": "pt2", 00:21:44.559 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.559 "is_configured": true, 00:21:44.559 "data_offset": 256, 00:21:44.559 "data_size": 7936 00:21:44.559 } 00:21:44.559 ] 00:21:44.559 } 00:21:44.559 } 00:21:44.559 }' 00:21:44.559 
14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:44.559 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:44.560 pt2' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.560 
14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.560 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:44.820 [2024-11-27 14:21:15.523570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 576b6db7-d266-4306-b1dc-5478480dac28 '!=' 576b6db7-d266-4306-b1dc-5478480dac28 ']' 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.820 [2024-11-27 14:21:15.571293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:44.820 
14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.820 "name": "raid_bdev1", 00:21:44.820 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 
00:21:44.820 "strip_size_kb": 0, 00:21:44.820 "state": "online", 00:21:44.820 "raid_level": "raid1", 00:21:44.820 "superblock": true, 00:21:44.820 "num_base_bdevs": 2, 00:21:44.820 "num_base_bdevs_discovered": 1, 00:21:44.820 "num_base_bdevs_operational": 1, 00:21:44.820 "base_bdevs_list": [ 00:21:44.820 { 00:21:44.820 "name": null, 00:21:44.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.820 "is_configured": false, 00:21:44.820 "data_offset": 0, 00:21:44.820 "data_size": 7936 00:21:44.820 }, 00:21:44.820 { 00:21:44.820 "name": "pt2", 00:21:44.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.820 "is_configured": true, 00:21:44.820 "data_offset": 256, 00:21:44.820 "data_size": 7936 00:21:44.820 } 00:21:44.820 ] 00:21:44.820 }' 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.820 14:21:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 [2024-11-27 14:21:16.066395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.390 [2024-11-27 14:21:16.066443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.390 [2024-11-27 14:21:16.066530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.390 [2024-11-27 14:21:16.066597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.390 [2024-11-27 14:21:16.066619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:45.390 14:21:16 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:21:45.390 14:21:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.390 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.390 [2024-11-27 14:21:16.142313] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:45.390 [2024-11-27 14:21:16.142398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.390 [2024-11-27 14:21:16.142417] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:45.390 [2024-11-27 14:21:16.142430] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.390 [2024-11-27 14:21:16.144932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.390 [2024-11-27 14:21:16.144987] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:45.390 [2024-11-27 14:21:16.145118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:45.390 [2024-11-27 14:21:16.145189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:45.390 [2024-11-27 14:21:16.145324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:45.390 [2024-11-27 14:21:16.145347] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:45.390 [2024-11-27 14:21:16.145624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:45.390 [2024-11-27 14:21:16.145854] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:45.391 [2024-11-27 14:21:16.145873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:21:45.391 [2024-11-27 14:21:16.146045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.391 pt2 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.391 "name": "raid_bdev1", 00:21:45.391 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:45.391 "strip_size_kb": 0, 00:21:45.391 "state": "online", 00:21:45.391 "raid_level": "raid1", 00:21:45.391 "superblock": true, 00:21:45.391 "num_base_bdevs": 2, 00:21:45.391 "num_base_bdevs_discovered": 1, 00:21:45.391 "num_base_bdevs_operational": 1, 00:21:45.391 "base_bdevs_list": [ 00:21:45.391 { 00:21:45.391 "name": null, 00:21:45.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.391 "is_configured": false, 00:21:45.391 "data_offset": 256, 00:21:45.391 "data_size": 7936 00:21:45.391 }, 00:21:45.391 { 00:21:45.391 "name": "pt2", 00:21:45.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.391 "is_configured": true, 00:21:45.391 "data_offset": 256, 00:21:45.391 "data_size": 7936 00:21:45.391 } 00:21:45.391 ] 00:21:45.391 }' 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.391 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.965 [2024-11-27 14:21:16.637389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.965 [2024-11-27 14:21:16.637429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:45.965 [2024-11-27 14:21:16.637510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:45.965 [2024-11-27 14:21:16.637565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:45.965 [2024-11-27 14:21:16.637574] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.965 [2024-11-27 14:21:16.697325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:45.965 [2024-11-27 14:21:16.697407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.965 [2024-11-27 14:21:16.697428] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:45.965 [2024-11-27 14:21:16.697438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.965 [2024-11-27 14:21:16.699802] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.965 [2024-11-27 14:21:16.699848] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:45.965 [2024-11-27 14:21:16.699948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:45.965 [2024-11-27 14:21:16.700040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:45.965 [2024-11-27 14:21:16.700238] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:45.965 [2024-11-27 14:21:16.700260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:45.965 [2024-11-27 14:21:16.700279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:45.965 [2024-11-27 14:21:16.700351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:45.965 [2024-11-27 14:21:16.700440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:45.965 [2024-11-27 14:21:16.700449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:45.965 [2024-11-27 14:21:16.700736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:45.965 [2024-11-27 14:21:16.700920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:45.965 [2024-11-27 14:21:16.700934] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:45.965 [2024-11-27 14:21:16.701104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.965 pt1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:45.965 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:45.966 "name": "raid_bdev1", 00:21:45.966 "uuid": "576b6db7-d266-4306-b1dc-5478480dac28", 00:21:45.966 "strip_size_kb": 0, 00:21:45.966 "state": "online", 00:21:45.966 "raid_level": "raid1", 
00:21:45.966 "superblock": true, 00:21:45.966 "num_base_bdevs": 2, 00:21:45.966 "num_base_bdevs_discovered": 1, 00:21:45.966 "num_base_bdevs_operational": 1, 00:21:45.966 "base_bdevs_list": [ 00:21:45.966 { 00:21:45.966 "name": null, 00:21:45.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.966 "is_configured": false, 00:21:45.966 "data_offset": 256, 00:21:45.966 "data_size": 7936 00:21:45.966 }, 00:21:45.966 { 00:21:45.966 "name": "pt2", 00:21:45.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.966 "is_configured": true, 00:21:45.966 "data_offset": 256, 00:21:45.966 "data_size": 7936 00:21:45.966 } 00:21:45.966 ] 00:21:45.966 }' 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:45.966 14:21:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.226 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.226 
[2024-11-27 14:21:17.176741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 576b6db7-d266-4306-b1dc-5478480dac28 '!=' 576b6db7-d266-4306-b1dc-5478480dac28 ']' 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86427 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86427 ']' 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86427 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86427 00:21:46.486 killing process with pid 86427 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86427' 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86427 00:21:46.486 [2024-11-27 14:21:17.253953] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:46.486 [2024-11-27 14:21:17.254050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.486 [2024-11-27 14:21:17.254098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.486 [2024-11-27 14:21:17.254111] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:46.486 14:21:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86427 00:21:46.746 [2024-11-27 14:21:17.462762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:47.685 14:21:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:47.685 00:21:47.685 real 0m6.336s 00:21:47.685 user 0m9.607s 00:21:47.685 sys 0m1.128s 00:21:47.685 14:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.685 14:21:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.685 ************************************ 00:21:47.685 END TEST raid_superblock_test_4k 00:21:47.685 ************************************ 00:21:47.945 14:21:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:47.945 14:21:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:47.945 14:21:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:47.945 14:21:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.945 14:21:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:47.945 ************************************ 00:21:47.945 START TEST raid_rebuild_test_sb_4k 00:21:47.945 ************************************ 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:47.945 14:21:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86761 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86761 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86761 ']' 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.945 14:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.945 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:47.945 Zero copy mechanism will not be used. 00:21:47.945 [2024-11-27 14:21:18.803185] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:21:47.945 [2024-11-27 14:21:18.803304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86761 ] 00:21:48.206 [2024-11-27 14:21:18.979795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.206 [2024-11-27 14:21:19.097949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.465 [2024-11-27 14:21:19.297294] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.465 [2024-11-27 14:21:19.297344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.724 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.983 BaseBdev1_malloc 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.983 [2024-11-27 14:21:19.692301] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:48.983 [2024-11-27 14:21:19.692396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.983 [2024-11-27 14:21:19.692425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:48.983 [2024-11-27 14:21:19.692438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.983 [2024-11-27 14:21:19.694673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.983 [2024-11-27 14:21:19.694718] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:48.983 BaseBdev1 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.983 BaseBdev2_malloc 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:48.983 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 [2024-11-27 14:21:19.749143] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:48.984 [2024-11-27 14:21:19.749228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:48.984 [2024-11-27 14:21:19.749254] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:48.984 [2024-11-27 14:21:19.749266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.984 [2024-11-27 14:21:19.751452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.984 [2024-11-27 14:21:19.751499] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:48.984 BaseBdev2 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 spare_malloc 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 spare_delay 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 
[2024-11-27 14:21:19.831800] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:48.984 [2024-11-27 14:21:19.831876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.984 [2024-11-27 14:21:19.831917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:48.984 [2024-11-27 14:21:19.831932] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.984 [2024-11-27 14:21:19.834300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.984 [2024-11-27 14:21:19.834346] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:48.984 spare 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 [2024-11-27 14:21:19.843840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.984 [2024-11-27 14:21:19.845823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:48.984 [2024-11-27 14:21:19.846032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:48.984 [2024-11-27 14:21:19.846055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:48.984 [2024-11-27 14:21:19.846360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:48.984 [2024-11-27 14:21:19.846558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:48.984 [2024-11-27 
14:21:19.846576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:48.984 [2024-11-27 14:21:19.846781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.984 "name": "raid_bdev1", 00:21:48.984 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:48.984 "strip_size_kb": 0, 00:21:48.984 "state": "online", 00:21:48.984 "raid_level": "raid1", 00:21:48.984 "superblock": true, 00:21:48.984 "num_base_bdevs": 2, 00:21:48.984 "num_base_bdevs_discovered": 2, 00:21:48.984 "num_base_bdevs_operational": 2, 00:21:48.984 "base_bdevs_list": [ 00:21:48.984 { 00:21:48.984 "name": "BaseBdev1", 00:21:48.984 "uuid": "99e8fc48-d478-5a29-95b3-1a7c79a1ba33", 00:21:48.984 "is_configured": true, 00:21:48.984 "data_offset": 256, 00:21:48.984 "data_size": 7936 00:21:48.984 }, 00:21:48.984 { 00:21:48.984 "name": "BaseBdev2", 00:21:48.984 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:48.984 "is_configured": true, 00:21:48.984 "data_offset": 256, 00:21:48.984 "data_size": 7936 00:21:48.984 } 00:21:48.984 ] 00:21:48.984 }' 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.984 14:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.552 [2024-11-27 14:21:20.331326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:49.552 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.552 
14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:49.811 [2024-11-27 14:21:20.638561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:49.811 /dev/nbd0 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:49.811 1+0 records in 00:21:49.811 1+0 records out 00:21:49.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314106 s, 13.0 MB/s 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:49.811 14:21:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:49.811 14:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:50.747 7936+0 records in 00:21:50.747 7936+0 records out 00:21:50.747 32505856 bytes (33 MB, 31 MiB) copied, 0.657208 s, 49.5 MB/s 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:50.747 
[2024-11-27 14:21:21.576272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.747 [2024-11-27 14:21:21.600366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.747 14:21:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.747 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.747 "name": "raid_bdev1", 00:21:50.747 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:50.747 "strip_size_kb": 0, 00:21:50.747 "state": "online", 00:21:50.747 "raid_level": "raid1", 00:21:50.747 "superblock": true, 00:21:50.747 "num_base_bdevs": 2, 00:21:50.747 "num_base_bdevs_discovered": 1, 00:21:50.747 "num_base_bdevs_operational": 1, 00:21:50.747 "base_bdevs_list": [ 00:21:50.747 { 00:21:50.747 "name": null, 00:21:50.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.747 "is_configured": false, 00:21:50.747 "data_offset": 0, 00:21:50.748 "data_size": 7936 00:21:50.748 }, 00:21:50.748 { 00:21:50.748 "name": "BaseBdev2", 00:21:50.748 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:50.748 "is_configured": true, 00:21:50.748 "data_offset": 256, 00:21:50.748 
"data_size": 7936 00:21:50.748 } 00:21:50.748 ] 00:21:50.748 }' 00:21:50.748 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.748 14:21:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.322 14:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.322 14:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.322 14:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.322 [2024-11-27 14:21:22.055626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.322 [2024-11-27 14:21:22.073914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:51.322 14:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.322 14:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:51.322 [2024-11-27 14:21:22.075962] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.273 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.273 "name": "raid_bdev1", 00:21:52.273 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:52.273 "strip_size_kb": 0, 00:21:52.273 "state": "online", 00:21:52.273 "raid_level": "raid1", 00:21:52.273 "superblock": true, 00:21:52.273 "num_base_bdevs": 2, 00:21:52.273 "num_base_bdevs_discovered": 2, 00:21:52.273 "num_base_bdevs_operational": 2, 00:21:52.273 "process": { 00:21:52.273 "type": "rebuild", 00:21:52.273 "target": "spare", 00:21:52.273 "progress": { 00:21:52.273 "blocks": 2560, 00:21:52.273 "percent": 32 00:21:52.273 } 00:21:52.273 }, 00:21:52.273 "base_bdevs_list": [ 00:21:52.273 { 00:21:52.273 "name": "spare", 00:21:52.273 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:52.273 "is_configured": true, 00:21:52.273 "data_offset": 256, 00:21:52.273 "data_size": 7936 00:21:52.274 }, 00:21:52.274 { 00:21:52.274 "name": "BaseBdev2", 00:21:52.274 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:52.274 "is_configured": true, 00:21:52.274 "data_offset": 256, 00:21:52.274 "data_size": 7936 00:21:52.274 } 00:21:52.274 ] 00:21:52.274 }' 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.274 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.274 [2024-11-27 14:21:23.215664] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.534 [2024-11-27 14:21:23.282032] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.534 [2024-11-27 14:21:23.282156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.534 [2024-11-27 14:21:23.282173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.534 [2024-11-27 14:21:23.282183] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.534 "name": "raid_bdev1", 00:21:52.534 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:52.534 "strip_size_kb": 0, 00:21:52.534 "state": "online", 00:21:52.534 "raid_level": "raid1", 00:21:52.534 "superblock": true, 00:21:52.534 "num_base_bdevs": 2, 00:21:52.534 "num_base_bdevs_discovered": 1, 00:21:52.534 "num_base_bdevs_operational": 1, 00:21:52.534 "base_bdevs_list": [ 00:21:52.534 { 00:21:52.534 "name": null, 00:21:52.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.534 "is_configured": false, 00:21:52.534 "data_offset": 0, 00:21:52.534 "data_size": 7936 00:21:52.534 }, 00:21:52.534 { 00:21:52.534 "name": "BaseBdev2", 00:21:52.534 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:52.534 "is_configured": true, 00:21:52.534 "data_offset": 256, 00:21:52.534 "data_size": 7936 00:21:52.534 } 00:21:52.534 ] 00:21:52.534 }' 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.534 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.106 14:21:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.106 "name": "raid_bdev1", 00:21:53.106 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:53.106 "strip_size_kb": 0, 00:21:53.106 "state": "online", 00:21:53.106 "raid_level": "raid1", 00:21:53.106 "superblock": true, 00:21:53.106 "num_base_bdevs": 2, 00:21:53.106 "num_base_bdevs_discovered": 1, 00:21:53.106 "num_base_bdevs_operational": 1, 00:21:53.106 "base_bdevs_list": [ 00:21:53.106 { 00:21:53.106 "name": null, 00:21:53.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.106 "is_configured": false, 00:21:53.106 "data_offset": 0, 00:21:53.106 "data_size": 7936 00:21:53.106 }, 00:21:53.106 { 00:21:53.106 "name": "BaseBdev2", 00:21:53.106 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:53.106 "is_configured": true, 00:21:53.106 "data_offset": 
256, 00:21:53.106 "data_size": 7936 00:21:53.106 } 00:21:53.106 ] 00:21:53.106 }' 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.106 [2024-11-27 14:21:23.953897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.106 [2024-11-27 14:21:23.970915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.106 14:21:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:53.106 [2024-11-27 14:21:23.972830] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.046 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.305 14:21:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.305 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.305 "name": "raid_bdev1", 00:21:54.305 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:54.305 "strip_size_kb": 0, 00:21:54.305 "state": "online", 00:21:54.305 "raid_level": "raid1", 00:21:54.305 "superblock": true, 00:21:54.305 "num_base_bdevs": 2, 00:21:54.305 "num_base_bdevs_discovered": 2, 00:21:54.305 "num_base_bdevs_operational": 2, 00:21:54.305 "process": { 00:21:54.305 "type": "rebuild", 00:21:54.305 "target": "spare", 00:21:54.305 "progress": { 00:21:54.305 "blocks": 2560, 00:21:54.305 "percent": 32 00:21:54.305 } 00:21:54.305 }, 00:21:54.305 "base_bdevs_list": [ 00:21:54.305 { 00:21:54.305 "name": "spare", 00:21:54.305 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:54.305 "is_configured": true, 00:21:54.306 "data_offset": 256, 00:21:54.306 "data_size": 7936 00:21:54.306 }, 00:21:54.306 { 00:21:54.306 "name": "BaseBdev2", 00:21:54.306 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:54.306 "is_configured": true, 00:21:54.306 "data_offset": 256, 00:21:54.306 "data_size": 7936 00:21:54.306 } 00:21:54.306 ] 00:21:54.306 }' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:54.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.306 14:21:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.306 "name": "raid_bdev1", 00:21:54.306 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:54.306 "strip_size_kb": 0, 00:21:54.306 "state": "online", 00:21:54.306 "raid_level": "raid1", 00:21:54.306 "superblock": true, 00:21:54.306 "num_base_bdevs": 2, 00:21:54.306 "num_base_bdevs_discovered": 2, 00:21:54.306 "num_base_bdevs_operational": 2, 00:21:54.306 "process": { 00:21:54.306 "type": "rebuild", 00:21:54.306 "target": "spare", 00:21:54.306 "progress": { 00:21:54.306 "blocks": 2816, 00:21:54.306 "percent": 35 00:21:54.306 } 00:21:54.306 }, 00:21:54.306 "base_bdevs_list": [ 00:21:54.306 { 00:21:54.306 "name": "spare", 00:21:54.306 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:54.306 "is_configured": true, 00:21:54.306 "data_offset": 256, 00:21:54.306 "data_size": 7936 00:21:54.306 }, 00:21:54.306 { 00:21:54.306 "name": "BaseBdev2", 00:21:54.306 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:54.306 "is_configured": true, 00:21:54.306 "data_offset": 256, 00:21:54.306 "data_size": 7936 00:21:54.306 } 00:21:54.306 ] 00:21:54.306 }' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.306 14:21:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:55.682 "name": "raid_bdev1", 00:21:55.682 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:55.682 "strip_size_kb": 0, 00:21:55.682 "state": "online", 00:21:55.682 "raid_level": "raid1", 00:21:55.682 "superblock": true, 00:21:55.682 "num_base_bdevs": 2, 00:21:55.682 "num_base_bdevs_discovered": 2, 00:21:55.682 "num_base_bdevs_operational": 2, 00:21:55.682 "process": { 00:21:55.682 "type": "rebuild", 00:21:55.682 "target": "spare", 00:21:55.682 "progress": { 00:21:55.682 "blocks": 5632, 00:21:55.682 "percent": 70 00:21:55.682 } 00:21:55.682 }, 00:21:55.682 "base_bdevs_list": [ 00:21:55.682 { 
00:21:55.682 "name": "spare", 00:21:55.682 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:55.682 "is_configured": true, 00:21:55.682 "data_offset": 256, 00:21:55.682 "data_size": 7936 00:21:55.682 }, 00:21:55.682 { 00:21:55.682 "name": "BaseBdev2", 00:21:55.682 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:55.682 "is_configured": true, 00:21:55.682 "data_offset": 256, 00:21:55.682 "data_size": 7936 00:21:55.682 } 00:21:55.682 ] 00:21:55.682 }' 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:55.682 14:21:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:56.249 [2024-11-27 14:21:27.088107] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:56.249 [2024-11-27 14:21:27.088210] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:56.249 [2024-11-27 14:21:27.088368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.508 "name": "raid_bdev1", 00:21:56.508 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:56.508 "strip_size_kb": 0, 00:21:56.508 "state": "online", 00:21:56.508 "raid_level": "raid1", 00:21:56.508 "superblock": true, 00:21:56.508 "num_base_bdevs": 2, 00:21:56.508 "num_base_bdevs_discovered": 2, 00:21:56.508 "num_base_bdevs_operational": 2, 00:21:56.508 "base_bdevs_list": [ 00:21:56.508 { 00:21:56.508 "name": "spare", 00:21:56.508 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:56.508 "is_configured": true, 00:21:56.508 "data_offset": 256, 00:21:56.508 "data_size": 7936 00:21:56.508 }, 00:21:56.508 { 00:21:56.508 "name": "BaseBdev2", 00:21:56.508 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:56.508 "is_configured": true, 00:21:56.508 "data_offset": 256, 00:21:56.508 "data_size": 7936 00:21:56.508 } 00:21:56.508 ] 00:21:56.508 }' 00:21:56.508 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.767 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.767 "name": "raid_bdev1", 00:21:56.767 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:56.767 "strip_size_kb": 0, 00:21:56.767 "state": "online", 00:21:56.767 "raid_level": "raid1", 00:21:56.767 "superblock": true, 00:21:56.767 "num_base_bdevs": 2, 00:21:56.767 "num_base_bdevs_discovered": 2, 00:21:56.768 "num_base_bdevs_operational": 2, 00:21:56.768 "base_bdevs_list": [ 00:21:56.768 { 00:21:56.768 "name": "spare", 00:21:56.768 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:56.768 "is_configured": true, 00:21:56.768 
"data_offset": 256, 00:21:56.768 "data_size": 7936 00:21:56.768 }, 00:21:56.768 { 00:21:56.768 "name": "BaseBdev2", 00:21:56.768 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:56.768 "is_configured": true, 00:21:56.768 "data_offset": 256, 00:21:56.768 "data_size": 7936 00:21:56.768 } 00:21:56.768 ] 00:21:56.768 }' 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.768 "name": "raid_bdev1", 00:21:56.768 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:56.768 "strip_size_kb": 0, 00:21:56.768 "state": "online", 00:21:56.768 "raid_level": "raid1", 00:21:56.768 "superblock": true, 00:21:56.768 "num_base_bdevs": 2, 00:21:56.768 "num_base_bdevs_discovered": 2, 00:21:56.768 "num_base_bdevs_operational": 2, 00:21:56.768 "base_bdevs_list": [ 00:21:56.768 { 00:21:56.768 "name": "spare", 00:21:56.768 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:56.768 "is_configured": true, 00:21:56.768 "data_offset": 256, 00:21:56.768 "data_size": 7936 00:21:56.768 }, 00:21:56.768 { 00:21:56.768 "name": "BaseBdev2", 00:21:56.768 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:56.768 "is_configured": true, 00:21:56.768 "data_offset": 256, 00:21:56.768 "data_size": 7936 00:21:56.768 } 00:21:56.768 ] 00:21:56.768 }' 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.768 14:21:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.336 
[2024-11-27 14:21:28.135997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.336 [2024-11-27 14:21:28.136040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.336 [2024-11-27 14:21:28.136158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.336 [2024-11-27 14:21:28.136234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.336 [2024-11-27 14:21:28.136249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.336 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:57.597 /dev/nbd0 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.597 1+0 records in 00:21:57.597 1+0 records out 00:21:57.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454759 s, 9.0 MB/s 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.597 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:57.857 /dev/nbd1 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:57.857 1+0 records in 00:21:57.857 1+0 records out 00:21:57.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307083 s, 13.3 MB/s 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:57.857 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.117 14:21:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:58.404 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:58.701 14:21:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.701 [2024-11-27 14:21:29.376070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:58.701 [2024-11-27 14:21:29.376157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:58.701 [2024-11-27 14:21:29.376187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:58.701 [2024-11-27 14:21:29.376198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:58.701 [2024-11-27 14:21:29.378556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:58.701 
[2024-11-27 14:21:29.378599] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:58.701 [2024-11-27 14:21:29.378705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:58.701 [2024-11-27 14:21:29.378759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.701 [2024-11-27 14:21:29.378912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.701 spare 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.701 [2024-11-27 14:21:29.478845] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:58.701 [2024-11-27 14:21:29.478920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:58.701 [2024-11-27 14:21:29.479300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:58.701 [2024-11-27 14:21:29.479564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:58.701 [2024-11-27 14:21:29.479583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:58.701 [2024-11-27 14:21:29.479791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:58.701 14:21:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.701 "name": "raid_bdev1", 00:21:58.701 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:58.701 "strip_size_kb": 0, 00:21:58.701 "state": "online", 00:21:58.701 "raid_level": "raid1", 00:21:58.701 "superblock": true, 00:21:58.701 "num_base_bdevs": 2, 00:21:58.701 "num_base_bdevs_discovered": 2, 00:21:58.701 "num_base_bdevs_operational": 2, 
00:21:58.701 "base_bdevs_list": [ 00:21:58.701 { 00:21:58.701 "name": "spare", 00:21:58.701 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:58.701 "is_configured": true, 00:21:58.701 "data_offset": 256, 00:21:58.701 "data_size": 7936 00:21:58.701 }, 00:21:58.701 { 00:21:58.701 "name": "BaseBdev2", 00:21:58.701 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:58.701 "is_configured": true, 00:21:58.701 "data_offset": 256, 00:21:58.701 "data_size": 7936 00:21:58.701 } 00:21:58.701 ] 00:21:58.701 }' 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.701 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.960 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.219 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.219 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.219 "name": "raid_bdev1", 00:21:59.219 
"uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:59.219 "strip_size_kb": 0, 00:21:59.219 "state": "online", 00:21:59.219 "raid_level": "raid1", 00:21:59.219 "superblock": true, 00:21:59.219 "num_base_bdevs": 2, 00:21:59.219 "num_base_bdevs_discovered": 2, 00:21:59.219 "num_base_bdevs_operational": 2, 00:21:59.219 "base_bdevs_list": [ 00:21:59.219 { 00:21:59.219 "name": "spare", 00:21:59.219 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:21:59.219 "is_configured": true, 00:21:59.219 "data_offset": 256, 00:21:59.219 "data_size": 7936 00:21:59.219 }, 00:21:59.219 { 00:21:59.219 "name": "BaseBdev2", 00:21:59.219 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:59.219 "is_configured": true, 00:21:59.219 "data_offset": 256, 00:21:59.219 "data_size": 7936 00:21:59.219 } 00:21:59.219 ] 00:21:59.219 }' 00:21:59.219 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.219 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:59.219 14:21:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.219 [2024-11-27 14:21:30.083114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.219 
14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.219 "name": "raid_bdev1", 00:21:59.219 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:21:59.219 "strip_size_kb": 0, 00:21:59.219 "state": "online", 00:21:59.219 "raid_level": "raid1", 00:21:59.219 "superblock": true, 00:21:59.219 "num_base_bdevs": 2, 00:21:59.219 "num_base_bdevs_discovered": 1, 00:21:59.219 "num_base_bdevs_operational": 1, 00:21:59.219 "base_bdevs_list": [ 00:21:59.219 { 00:21:59.219 "name": null, 00:21:59.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.219 "is_configured": false, 00:21:59.219 "data_offset": 0, 00:21:59.219 "data_size": 7936 00:21:59.219 }, 00:21:59.219 { 00:21:59.219 "name": "BaseBdev2", 00:21:59.219 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:21:59.219 "is_configured": true, 00:21:59.219 "data_offset": 256, 00:21:59.219 "data_size": 7936 00:21:59.219 } 00:21:59.219 ] 00:21:59.219 }' 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.219 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.787 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:59.787 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.787 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.787 [2024-11-27 14:21:30.542346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.787 [2024-11-27 14:21:30.542597] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:21:59.787 [2024-11-27 14:21:30.542622] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:59.787 [2024-11-27 14:21:30.542657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:59.787 [2024-11-27 14:21:30.560386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:59.787 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.787 14:21:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:59.787 [2024-11-27 14:21:30.562572] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.725 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:00.725 
"name": "raid_bdev1", 00:22:00.725 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:00.725 "strip_size_kb": 0, 00:22:00.725 "state": "online", 00:22:00.725 "raid_level": "raid1", 00:22:00.725 "superblock": true, 00:22:00.725 "num_base_bdevs": 2, 00:22:00.726 "num_base_bdevs_discovered": 2, 00:22:00.726 "num_base_bdevs_operational": 2, 00:22:00.726 "process": { 00:22:00.726 "type": "rebuild", 00:22:00.726 "target": "spare", 00:22:00.726 "progress": { 00:22:00.726 "blocks": 2560, 00:22:00.726 "percent": 32 00:22:00.726 } 00:22:00.726 }, 00:22:00.726 "base_bdevs_list": [ 00:22:00.726 { 00:22:00.726 "name": "spare", 00:22:00.726 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:22:00.726 "is_configured": true, 00:22:00.726 "data_offset": 256, 00:22:00.726 "data_size": 7936 00:22:00.726 }, 00:22:00.726 { 00:22:00.726 "name": "BaseBdev2", 00:22:00.726 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:00.726 "is_configured": true, 00:22:00.726 "data_offset": 256, 00:22:00.726 "data_size": 7936 00:22:00.726 } 00:22:00.726 ] 00:22:00.726 }' 00:22:00.726 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:00.726 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:00.726 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:00.985 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:00.985 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:00.985 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.985 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.986 [2024-11-27 14:21:31.726274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.986 [2024-11-27 
14:21:31.768550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:00.986 [2024-11-27 14:21:31.768640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.986 [2024-11-27 14:21:31.768655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:00.986 [2024-11-27 14:21:31.768664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.986 "name": "raid_bdev1", 00:22:00.986 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:00.986 "strip_size_kb": 0, 00:22:00.986 "state": "online", 00:22:00.986 "raid_level": "raid1", 00:22:00.986 "superblock": true, 00:22:00.986 "num_base_bdevs": 2, 00:22:00.986 "num_base_bdevs_discovered": 1, 00:22:00.986 "num_base_bdevs_operational": 1, 00:22:00.986 "base_bdevs_list": [ 00:22:00.986 { 00:22:00.986 "name": null, 00:22:00.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.986 "is_configured": false, 00:22:00.986 "data_offset": 0, 00:22:00.986 "data_size": 7936 00:22:00.986 }, 00:22:00.986 { 00:22:00.986 "name": "BaseBdev2", 00:22:00.986 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:00.986 "is_configured": true, 00:22:00.986 "data_offset": 256, 00:22:00.986 "data_size": 7936 00:22:00.986 } 00:22:00.986 ] 00:22:00.986 }' 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.986 14:21:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.554 14:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:01.554 14:21:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.554 14:21:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.554 [2024-11-27 14:21:32.259425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.554 [2024-11-27 14:21:32.259508] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.554 [2024-11-27 14:21:32.259529] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:01.554 [2024-11-27 14:21:32.259540] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.554 [2024-11-27 14:21:32.260009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.554 [2024-11-27 14:21:32.260044] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.554 [2024-11-27 14:21:32.260154] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:01.554 [2024-11-27 14:21:32.260173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:01.554 [2024-11-27 14:21:32.260186] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:01.554 [2024-11-27 14:21:32.260213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:01.554 [2024-11-27 14:21:32.276165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:01.554 spare 00:22:01.554 14:21:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.554 14:21:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:01.554 [2024-11-27 14:21:32.277971] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.495 "name": "raid_bdev1", 00:22:02.495 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:02.495 "strip_size_kb": 0, 00:22:02.495 
"state": "online", 00:22:02.495 "raid_level": "raid1", 00:22:02.495 "superblock": true, 00:22:02.495 "num_base_bdevs": 2, 00:22:02.495 "num_base_bdevs_discovered": 2, 00:22:02.495 "num_base_bdevs_operational": 2, 00:22:02.495 "process": { 00:22:02.495 "type": "rebuild", 00:22:02.495 "target": "spare", 00:22:02.495 "progress": { 00:22:02.495 "blocks": 2560, 00:22:02.495 "percent": 32 00:22:02.495 } 00:22:02.495 }, 00:22:02.495 "base_bdevs_list": [ 00:22:02.495 { 00:22:02.495 "name": "spare", 00:22:02.495 "uuid": "b051219e-421c-5db6-b6f8-f4df99846dbb", 00:22:02.495 "is_configured": true, 00:22:02.495 "data_offset": 256, 00:22:02.495 "data_size": 7936 00:22:02.495 }, 00:22:02.495 { 00:22:02.495 "name": "BaseBdev2", 00:22:02.495 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:02.495 "is_configured": true, 00:22:02.495 "data_offset": 256, 00:22:02.495 "data_size": 7936 00:22:02.495 } 00:22:02.495 ] 00:22:02.495 }' 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.495 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.495 [2024-11-27 14:21:33.413615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:02.755 [2024-11-27 14:21:33.483329] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:22:02.755 [2024-11-27 14:21:33.483421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.755 [2024-11-27 14:21:33.483439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:02.755 [2024-11-27 14:21:33.483447] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.755 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.756 14:21:33 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.756 "name": "raid_bdev1", 00:22:02.756 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:02.756 "strip_size_kb": 0, 00:22:02.756 "state": "online", 00:22:02.756 "raid_level": "raid1", 00:22:02.756 "superblock": true, 00:22:02.756 "num_base_bdevs": 2, 00:22:02.756 "num_base_bdevs_discovered": 1, 00:22:02.756 "num_base_bdevs_operational": 1, 00:22:02.756 "base_bdevs_list": [ 00:22:02.756 { 00:22:02.756 "name": null, 00:22:02.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.756 "is_configured": false, 00:22:02.756 "data_offset": 0, 00:22:02.756 "data_size": 7936 00:22:02.756 }, 00:22:02.756 { 00:22:02.756 "name": "BaseBdev2", 00:22:02.756 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:02.756 "is_configured": true, 00:22:02.756 "data_offset": 256, 00:22:02.756 "data_size": 7936 00:22:02.756 } 00:22:02.756 ] 00:22:02.756 }' 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.756 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.326 14:21:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.326 "name": "raid_bdev1", 00:22:03.326 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:03.326 "strip_size_kb": 0, 00:22:03.326 "state": "online", 00:22:03.326 "raid_level": "raid1", 00:22:03.326 "superblock": true, 00:22:03.326 "num_base_bdevs": 2, 00:22:03.326 "num_base_bdevs_discovered": 1, 00:22:03.326 "num_base_bdevs_operational": 1, 00:22:03.326 "base_bdevs_list": [ 00:22:03.326 { 00:22:03.326 "name": null, 00:22:03.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.326 "is_configured": false, 00:22:03.326 "data_offset": 0, 00:22:03.326 "data_size": 7936 00:22:03.326 }, 00:22:03.326 { 00:22:03.326 "name": "BaseBdev2", 00:22:03.326 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:03.326 "is_configured": true, 00:22:03.326 "data_offset": 256, 00:22:03.326 "data_size": 7936 00:22:03.326 } 00:22:03.326 ] 00:22:03.326 }' 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.326 [2024-11-27 14:21:34.129231] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:03.326 [2024-11-27 14:21:34.129291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.326 [2024-11-27 14:21:34.129319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:03.326 [2024-11-27 14:21:34.129338] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.326 [2024-11-27 14:21:34.129777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.326 [2024-11-27 14:21:34.129802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:03.326 [2024-11-27 14:21:34.129885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:03.326 [2024-11-27 14:21:34.129916] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:03.326 [2024-11-27 14:21:34.129928] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:03.326 [2024-11-27 14:21:34.129938] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:03.326 BaseBdev1 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.326 14:21:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.266 "name": "raid_bdev1", 00:22:04.266 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:04.266 "strip_size_kb": 0, 00:22:04.266 "state": "online", 00:22:04.266 "raid_level": "raid1", 00:22:04.266 "superblock": true, 00:22:04.266 "num_base_bdevs": 2, 00:22:04.266 "num_base_bdevs_discovered": 1, 00:22:04.266 "num_base_bdevs_operational": 1, 00:22:04.266 "base_bdevs_list": [ 00:22:04.266 { 00:22:04.266 "name": null, 00:22:04.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.266 "is_configured": false, 00:22:04.266 "data_offset": 0, 00:22:04.266 "data_size": 7936 00:22:04.266 }, 00:22:04.266 { 00:22:04.266 "name": "BaseBdev2", 00:22:04.266 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:04.266 "is_configured": true, 00:22:04.266 "data_offset": 256, 00:22:04.266 "data_size": 7936 00:22:04.266 } 00:22:04.266 ] 00:22:04.266 }' 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.266 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:04.835 "name": "raid_bdev1", 00:22:04.835 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:04.835 "strip_size_kb": 0, 00:22:04.835 "state": "online", 00:22:04.835 "raid_level": "raid1", 00:22:04.835 "superblock": true, 00:22:04.835 "num_base_bdevs": 2, 00:22:04.835 "num_base_bdevs_discovered": 1, 00:22:04.835 "num_base_bdevs_operational": 1, 00:22:04.835 "base_bdevs_list": [ 00:22:04.835 { 00:22:04.835 "name": null, 00:22:04.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.835 "is_configured": false, 00:22:04.835 "data_offset": 0, 00:22:04.835 "data_size": 7936 00:22:04.835 }, 00:22:04.835 { 00:22:04.835 "name": "BaseBdev2", 00:22:04.835 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:04.835 "is_configured": true, 00:22:04.835 "data_offset": 256, 00:22:04.835 "data_size": 7936 00:22:04.835 } 00:22:04.835 ] 00:22:04.835 }' 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:04.835 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.836 [2024-11-27 14:21:35.702629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:04.836 [2024-11-27 14:21:35.702834] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:04.836 [2024-11-27 14:21:35.702859] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:04.836 request: 00:22:04.836 { 00:22:04.836 "base_bdev": "BaseBdev1", 00:22:04.836 "raid_bdev": "raid_bdev1", 00:22:04.836 "method": "bdev_raid_add_base_bdev", 00:22:04.836 "req_id": 1 00:22:04.836 } 00:22:04.836 Got JSON-RPC error response 00:22:04.836 response: 00:22:04.836 { 00:22:04.836 "code": -22, 00:22:04.836 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:04.836 } 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:04.836 14:21:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:05.774 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:05.774 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:05.774 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:05.774 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:05.774 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:05.775 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.033 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.033 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.033 "name": "raid_bdev1", 00:22:06.033 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:06.033 "strip_size_kb": 0, 00:22:06.033 "state": "online", 00:22:06.033 "raid_level": "raid1", 00:22:06.033 "superblock": true, 00:22:06.033 "num_base_bdevs": 2, 00:22:06.033 "num_base_bdevs_discovered": 1, 00:22:06.033 "num_base_bdevs_operational": 1, 00:22:06.033 "base_bdevs_list": [ 00:22:06.033 { 00:22:06.033 "name": null, 00:22:06.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.033 "is_configured": false, 00:22:06.033 "data_offset": 0, 00:22:06.033 "data_size": 7936 00:22:06.033 }, 00:22:06.033 { 00:22:06.033 "name": "BaseBdev2", 00:22:06.033 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:06.033 "is_configured": true, 00:22:06.033 "data_offset": 256, 00:22:06.033 "data_size": 7936 00:22:06.033 } 00:22:06.033 ] 00:22:06.033 }' 00:22:06.033 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.033 14:21:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:06.292 14:21:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.292 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:06.292 "name": "raid_bdev1", 00:22:06.292 "uuid": "61621d5e-0df5-44ca-9c09-4b7ace76fcea", 00:22:06.292 "strip_size_kb": 0, 00:22:06.292 "state": "online", 00:22:06.292 "raid_level": "raid1", 00:22:06.292 "superblock": true, 00:22:06.292 "num_base_bdevs": 2, 00:22:06.292 "num_base_bdevs_discovered": 1, 00:22:06.292 "num_base_bdevs_operational": 1, 00:22:06.292 "base_bdevs_list": [ 00:22:06.292 { 00:22:06.292 "name": null, 00:22:06.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.292 "is_configured": false, 00:22:06.292 "data_offset": 0, 00:22:06.292 "data_size": 7936 00:22:06.292 }, 00:22:06.292 { 00:22:06.292 "name": "BaseBdev2", 00:22:06.293 "uuid": "ac20a8cd-9042-58f7-8287-82c66299ed81", 00:22:06.293 "is_configured": true, 00:22:06.293 "data_offset": 256, 00:22:06.293 "data_size": 7936 00:22:06.293 } 00:22:06.293 ] 00:22:06.293 }' 00:22:06.293 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:06.551 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:06.552 14:21:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86761 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86761 ']' 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86761 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86761 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.552 killing process with pid 86761 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86761' 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86761 00:22:06.552 Received shutdown signal, test time was about 60.000000 seconds 00:22:06.552 00:22:06.552 Latency(us) 00:22:06.552 [2024-11-27T14:21:37.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.552 [2024-11-27T14:21:37.508Z] =================================================================================================================== 00:22:06.552 [2024-11-27T14:21:37.508Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:06.552 [2024-11-27 14:21:37.358529] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:06.552 14:21:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86761 00:22:06.552 [2024-11-27 14:21:37.358688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:06.552 [2024-11-27 
14:21:37.358747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:06.552 [2024-11-27 14:21:37.358762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:06.816 [2024-11-27 14:21:37.676556] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:08.219 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:08.220 00:22:08.220 real 0m20.162s 00:22:08.220 user 0m26.333s 00:22:08.220 sys 0m2.683s 00:22:08.220 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.220 14:21:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:08.220 ************************************ 00:22:08.220 END TEST raid_rebuild_test_sb_4k 00:22:08.220 ************************************ 00:22:08.220 14:21:38 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:08.220 14:21:38 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:08.220 14:21:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:08.220 14:21:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.220 14:21:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:08.220 ************************************ 00:22:08.220 START TEST raid_state_function_test_sb_md_separate 00:22:08.220 ************************************ 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:08.220 
14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:08.220 14:21:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87446 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87446' 00:22:08.220 Process raid pid: 87446 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87446 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87446 ']' 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.220 14:21:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.220 [2024-11-27 14:21:39.040094] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:08.220 [2024-11-27 14:21:39.040231] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.477 [2024-11-27 14:21:39.217233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.477 [2024-11-27 14:21:39.335826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.736 [2024-11-27 14:21:39.544348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.736 [2024-11-27 14:21:39.544399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.996 [2024-11-27 14:21:39.927576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.996 [2024-11-27 14:21:39.927665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:22:08.996 [2024-11-27 14:21:39.927676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.996 [2024-11-27 14:21:39.927687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.996 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.256 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.256 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.256 "name": "Existed_Raid", 00:22:09.256 "uuid": "7a01987a-f0b3-42a1-b60a-bff51232e811", 00:22:09.256 "strip_size_kb": 0, 00:22:09.256 "state": "configuring", 00:22:09.256 "raid_level": "raid1", 00:22:09.256 "superblock": true, 00:22:09.256 "num_base_bdevs": 2, 00:22:09.256 "num_base_bdevs_discovered": 0, 00:22:09.256 "num_base_bdevs_operational": 2, 00:22:09.256 "base_bdevs_list": [ 00:22:09.256 { 00:22:09.256 "name": "BaseBdev1", 00:22:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.256 "is_configured": false, 00:22:09.256 "data_offset": 0, 00:22:09.256 "data_size": 0 00:22:09.256 }, 00:22:09.256 { 00:22:09.256 "name": "BaseBdev2", 00:22:09.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.256 "is_configured": false, 00:22:09.256 "data_offset": 0, 00:22:09.256 "data_size": 0 00:22:09.256 } 00:22:09.256 ] 00:22:09.256 }' 00:22:09.256 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.256 14:21:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 
[2024-11-27 14:21:40.346818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.516 [2024-11-27 14:21:40.346867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 [2024-11-27 14:21:40.354839] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.516 [2024-11-27 14:21:40.354895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.516 [2024-11-27 14:21:40.354906] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.516 [2024-11-27 14:21:40.354920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 [2024-11-27 14:21:40.402907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.516 
BaseBdev1 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.516 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.516 [ 00:22:09.516 { 00:22:09.516 "name": "BaseBdev1", 00:22:09.516 "aliases": [ 00:22:09.516 "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e" 00:22:09.516 ], 00:22:09.516 "product_name": "Malloc disk", 
00:22:09.516 "block_size": 4096, 00:22:09.516 "num_blocks": 8192, 00:22:09.516 "uuid": "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e", 00:22:09.516 "md_size": 32, 00:22:09.516 "md_interleave": false, 00:22:09.516 "dif_type": 0, 00:22:09.516 "assigned_rate_limits": { 00:22:09.516 "rw_ios_per_sec": 0, 00:22:09.516 "rw_mbytes_per_sec": 0, 00:22:09.516 "r_mbytes_per_sec": 0, 00:22:09.516 "w_mbytes_per_sec": 0 00:22:09.516 }, 00:22:09.516 "claimed": true, 00:22:09.516 "claim_type": "exclusive_write", 00:22:09.516 "zoned": false, 00:22:09.516 "supported_io_types": { 00:22:09.516 "read": true, 00:22:09.516 "write": true, 00:22:09.516 "unmap": true, 00:22:09.517 "flush": true, 00:22:09.517 "reset": true, 00:22:09.517 "nvme_admin": false, 00:22:09.517 "nvme_io": false, 00:22:09.517 "nvme_io_md": false, 00:22:09.517 "write_zeroes": true, 00:22:09.517 "zcopy": true, 00:22:09.517 "get_zone_info": false, 00:22:09.517 "zone_management": false, 00:22:09.517 "zone_append": false, 00:22:09.517 "compare": false, 00:22:09.517 "compare_and_write": false, 00:22:09.517 "abort": true, 00:22:09.517 "seek_hole": false, 00:22:09.517 "seek_data": false, 00:22:09.517 "copy": true, 00:22:09.517 "nvme_iov_md": false 00:22:09.517 }, 00:22:09.517 "memory_domains": [ 00:22:09.517 { 00:22:09.517 "dma_device_id": "system", 00:22:09.517 "dma_device_type": 1 00:22:09.517 }, 00:22:09.517 { 00:22:09.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.517 "dma_device_type": 2 00:22:09.517 } 00:22:09.517 ], 00:22:09.517 "driver_specific": {} 00:22:09.517 } 00:22:09.517 ] 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:09.517 14:21:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.517 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.776 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.776 "name": "Existed_Raid", 00:22:09.776 "uuid": "b4fe112c-1518-4772-a1b4-979ae87737e9", 
00:22:09.776 "strip_size_kb": 0, 00:22:09.776 "state": "configuring", 00:22:09.776 "raid_level": "raid1", 00:22:09.776 "superblock": true, 00:22:09.776 "num_base_bdevs": 2, 00:22:09.776 "num_base_bdevs_discovered": 1, 00:22:09.776 "num_base_bdevs_operational": 2, 00:22:09.776 "base_bdevs_list": [ 00:22:09.776 { 00:22:09.776 "name": "BaseBdev1", 00:22:09.776 "uuid": "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e", 00:22:09.776 "is_configured": true, 00:22:09.776 "data_offset": 256, 00:22:09.776 "data_size": 7936 00:22:09.776 }, 00:22:09.776 { 00:22:09.776 "name": "BaseBdev2", 00:22:09.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.776 "is_configured": false, 00:22:09.776 "data_offset": 0, 00:22:09.776 "data_size": 0 00:22:09.776 } 00:22:09.776 ] 00:22:09.776 }' 00:22:09.776 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.776 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.036 [2024-11-27 14:21:40.898177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:10.036 [2024-11-27 14:21:40.898242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:10.036 14:21:40 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.036 [2024-11-27 14:21:40.906205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:10.036 [2024-11-27 14:21:40.908248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:10.036 [2024-11-27 14:21:40.908298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.036 "name": "Existed_Raid", 00:22:10.036 "uuid": "b4919f44-7532-418c-8952-1de70feecb05", 00:22:10.036 "strip_size_kb": 0, 00:22:10.036 "state": "configuring", 00:22:10.036 "raid_level": "raid1", 00:22:10.036 "superblock": true, 00:22:10.036 "num_base_bdevs": 2, 00:22:10.036 "num_base_bdevs_discovered": 1, 00:22:10.036 "num_base_bdevs_operational": 2, 00:22:10.036 "base_bdevs_list": [ 00:22:10.036 { 00:22:10.036 "name": "BaseBdev1", 00:22:10.036 "uuid": "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e", 00:22:10.036 "is_configured": true, 00:22:10.036 "data_offset": 256, 00:22:10.036 "data_size": 7936 00:22:10.036 }, 00:22:10.036 { 00:22:10.036 "name": "BaseBdev2", 00:22:10.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.036 "is_configured": false, 00:22:10.036 "data_offset": 0, 00:22:10.036 "data_size": 0 00:22:10.036 } 00:22:10.036 ] 00:22:10.036 }' 00:22:10.036 14:21:40 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.036 14:21:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.606 [2024-11-27 14:21:41.425645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.606 [2024-11-27 14:21:41.425943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:10.606 [2024-11-27 14:21:41.425964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:10.606 [2024-11-27 14:21:41.426059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:10.606 [2024-11-27 14:21:41.426231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:10.606 [2024-11-27 14:21:41.426255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:10.606 [2024-11-27 14:21:41.426380] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.606 BaseBdev2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.606 [ 00:22:10.606 { 00:22:10.606 "name": "BaseBdev2", 00:22:10.606 "aliases": [ 00:22:10.606 "e3bf3fd5-4da7-41c4-9905-bb6ff99c15d7" 00:22:10.606 ], 00:22:10.606 "product_name": "Malloc disk", 00:22:10.606 "block_size": 4096, 00:22:10.606 "num_blocks": 8192, 00:22:10.606 "uuid": "e3bf3fd5-4da7-41c4-9905-bb6ff99c15d7", 00:22:10.606 "md_size": 32, 00:22:10.606 "md_interleave": false, 00:22:10.606 "dif_type": 0, 00:22:10.606 "assigned_rate_limits": { 00:22:10.606 "rw_ios_per_sec": 0, 00:22:10.606 "rw_mbytes_per_sec": 0, 00:22:10.606 "r_mbytes_per_sec": 0, 00:22:10.606 "w_mbytes_per_sec": 0 00:22:10.606 }, 00:22:10.606 "claimed": true, 00:22:10.606 "claim_type": 
"exclusive_write", 00:22:10.606 "zoned": false, 00:22:10.606 "supported_io_types": { 00:22:10.606 "read": true, 00:22:10.606 "write": true, 00:22:10.606 "unmap": true, 00:22:10.606 "flush": true, 00:22:10.606 "reset": true, 00:22:10.606 "nvme_admin": false, 00:22:10.606 "nvme_io": false, 00:22:10.606 "nvme_io_md": false, 00:22:10.606 "write_zeroes": true, 00:22:10.606 "zcopy": true, 00:22:10.606 "get_zone_info": false, 00:22:10.606 "zone_management": false, 00:22:10.606 "zone_append": false, 00:22:10.606 "compare": false, 00:22:10.606 "compare_and_write": false, 00:22:10.606 "abort": true, 00:22:10.606 "seek_hole": false, 00:22:10.606 "seek_data": false, 00:22:10.606 "copy": true, 00:22:10.606 "nvme_iov_md": false 00:22:10.606 }, 00:22:10.606 "memory_domains": [ 00:22:10.606 { 00:22:10.606 "dma_device_id": "system", 00:22:10.606 "dma_device_type": 1 00:22:10.606 }, 00:22:10.606 { 00:22:10.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.606 "dma_device_type": 2 00:22:10.606 } 00:22:10.606 ], 00:22:10.606 "driver_specific": {} 00:22:10.606 } 00:22:10.606 ] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.606 
14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.606 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.606 "name": "Existed_Raid", 00:22:10.606 "uuid": "b4919f44-7532-418c-8952-1de70feecb05", 00:22:10.606 "strip_size_kb": 0, 00:22:10.606 "state": "online", 00:22:10.606 "raid_level": "raid1", 00:22:10.606 "superblock": true, 00:22:10.606 "num_base_bdevs": 2, 00:22:10.606 "num_base_bdevs_discovered": 2, 00:22:10.606 "num_base_bdevs_operational": 2, 00:22:10.606 
"base_bdevs_list": [ 00:22:10.606 { 00:22:10.606 "name": "BaseBdev1", 00:22:10.606 "uuid": "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e", 00:22:10.606 "is_configured": true, 00:22:10.606 "data_offset": 256, 00:22:10.606 "data_size": 7936 00:22:10.607 }, 00:22:10.607 { 00:22:10.607 "name": "BaseBdev2", 00:22:10.607 "uuid": "e3bf3fd5-4da7-41c4-9905-bb6ff99c15d7", 00:22:10.607 "is_configured": true, 00:22:10.607 "data_offset": 256, 00:22:10.607 "data_size": 7936 00:22:10.607 } 00:22:10.607 ] 00:22:10.607 }' 00:22:10.607 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.607 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:22:11.175 [2024-11-27 14:21:41.929224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.175 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:11.175 "name": "Existed_Raid", 00:22:11.175 "aliases": [ 00:22:11.175 "b4919f44-7532-418c-8952-1de70feecb05" 00:22:11.175 ], 00:22:11.175 "product_name": "Raid Volume", 00:22:11.175 "block_size": 4096, 00:22:11.175 "num_blocks": 7936, 00:22:11.175 "uuid": "b4919f44-7532-418c-8952-1de70feecb05", 00:22:11.175 "md_size": 32, 00:22:11.175 "md_interleave": false, 00:22:11.175 "dif_type": 0, 00:22:11.175 "assigned_rate_limits": { 00:22:11.175 "rw_ios_per_sec": 0, 00:22:11.175 "rw_mbytes_per_sec": 0, 00:22:11.175 "r_mbytes_per_sec": 0, 00:22:11.175 "w_mbytes_per_sec": 0 00:22:11.175 }, 00:22:11.175 "claimed": false, 00:22:11.175 "zoned": false, 00:22:11.175 "supported_io_types": { 00:22:11.175 "read": true, 00:22:11.175 "write": true, 00:22:11.175 "unmap": false, 00:22:11.175 "flush": false, 00:22:11.175 "reset": true, 00:22:11.175 "nvme_admin": false, 00:22:11.175 "nvme_io": false, 00:22:11.175 "nvme_io_md": false, 00:22:11.175 "write_zeroes": true, 00:22:11.175 "zcopy": false, 00:22:11.175 "get_zone_info": false, 00:22:11.175 "zone_management": false, 00:22:11.175 "zone_append": false, 00:22:11.175 "compare": false, 00:22:11.175 "compare_and_write": false, 00:22:11.175 "abort": false, 00:22:11.175 "seek_hole": false, 00:22:11.175 "seek_data": false, 00:22:11.175 "copy": false, 00:22:11.175 "nvme_iov_md": false 00:22:11.175 }, 00:22:11.175 "memory_domains": [ 00:22:11.175 { 00:22:11.175 "dma_device_id": "system", 00:22:11.175 "dma_device_type": 1 00:22:11.175 }, 00:22:11.175 { 00:22:11.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.175 "dma_device_type": 2 00:22:11.175 }, 00:22:11.175 { 
00:22:11.175 "dma_device_id": "system", 00:22:11.175 "dma_device_type": 1 00:22:11.175 }, 00:22:11.175 { 00:22:11.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.175 "dma_device_type": 2 00:22:11.175 } 00:22:11.175 ], 00:22:11.175 "driver_specific": { 00:22:11.175 "raid": { 00:22:11.175 "uuid": "b4919f44-7532-418c-8952-1de70feecb05", 00:22:11.175 "strip_size_kb": 0, 00:22:11.175 "state": "online", 00:22:11.175 "raid_level": "raid1", 00:22:11.175 "superblock": true, 00:22:11.175 "num_base_bdevs": 2, 00:22:11.175 "num_base_bdevs_discovered": 2, 00:22:11.175 "num_base_bdevs_operational": 2, 00:22:11.175 "base_bdevs_list": [ 00:22:11.175 { 00:22:11.175 "name": "BaseBdev1", 00:22:11.175 "uuid": "70060d8f-2e0e-4ffa-8736-e4d2411ecb4e", 00:22:11.175 "is_configured": true, 00:22:11.175 "data_offset": 256, 00:22:11.175 "data_size": 7936 00:22:11.176 }, 00:22:11.176 { 00:22:11.176 "name": "BaseBdev2", 00:22:11.176 "uuid": "e3bf3fd5-4da7-41c4-9905-bb6ff99c15d7", 00:22:11.176 "is_configured": true, 00:22:11.176 "data_offset": 256, 00:22:11.176 "data_size": 7936 00:22:11.176 } 00:22:11.176 ] 00:22:11.176 } 00:22:11.176 } 00:22:11.176 }' 00:22:11.176 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:11.176 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:11.176 BaseBdev2' 00:22:11.176 14:21:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:11.176 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.436 [2024-11-27 14:21:42.132595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.436 "name": "Existed_Raid", 00:22:11.436 "uuid": "b4919f44-7532-418c-8952-1de70feecb05", 00:22:11.436 "strip_size_kb": 0, 00:22:11.436 "state": "online", 00:22:11.436 "raid_level": "raid1", 00:22:11.436 "superblock": true, 00:22:11.436 "num_base_bdevs": 2, 00:22:11.436 "num_base_bdevs_discovered": 1, 00:22:11.436 "num_base_bdevs_operational": 1, 00:22:11.436 "base_bdevs_list": [ 00:22:11.436 { 00:22:11.436 "name": null, 00:22:11.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.436 "is_configured": false, 00:22:11.436 "data_offset": 0, 00:22:11.436 "data_size": 7936 00:22:11.436 }, 00:22:11.436 { 00:22:11.436 "name": "BaseBdev2", 00:22:11.436 "uuid": 
"e3bf3fd5-4da7-41c4-9905-bb6ff99c15d7", 00:22:11.436 "is_configured": true, 00:22:11.436 "data_offset": 256, 00:22:11.436 "data_size": 7936 00:22:11.436 } 00:22:11.436 ] 00:22:11.436 }' 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.436 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.005 [2024-11-27 14:21:42.767542] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:12.005 [2024-11-27 14:21:42.767668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.005 [2024-11-27 14:21:42.876385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.005 [2024-11-27 14:21:42.876446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.005 [2024-11-27 14:21:42.876460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:12.005 14:21:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87446 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87446 ']' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87446 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.005 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87446 00:22:12.266 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.266 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.266 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87446' 00:22:12.266 killing process with pid 87446 00:22:12.266 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87446 00:22:12.266 [2024-11-27 14:21:42.974440] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:12.266 14:21:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87446 00:22:12.266 [2024-11-27 14:21:42.992898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:13.662 14:21:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:13.662 00:22:13.662 real 0m5.238s 00:22:13.662 user 0m7.496s 00:22:13.662 sys 0m0.909s 00:22:13.662 14:21:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.662 
14:21:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.662 ************************************ 00:22:13.662 END TEST raid_state_function_test_sb_md_separate 00:22:13.662 ************************************ 00:22:13.662 14:21:44 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:13.662 14:21:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:13.662 14:21:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.662 14:21:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:13.662 ************************************ 00:22:13.662 START TEST raid_superblock_test_md_separate 00:22:13.662 ************************************ 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87702 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87702 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87702 ']' 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.662 14:21:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.662 [2024-11-27 14:21:44.333413] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:13.662 [2024-11-27 14:21:44.333549] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87702 ] 00:22:13.662 [2024-11-27 14:21:44.505909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.921 [2024-11-27 14:21:44.620996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.921 [2024-11-27 14:21:44.807113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.921 [2024-11-27 14:21:44.807162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:14.491 14:21:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.491 malloc1 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.491 [2024-11-27 14:21:45.247416] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:14.491 [2024-11-27 14:21:45.247488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.491 [2024-11-27 14:21:45.247512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:14.491 [2024-11-27 14:21:45.247522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.491 [2024-11-27 14:21:45.249614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.491 [2024-11-27 14:21:45.249669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:22:14.491 pt1 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.491 malloc2 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.491 14:21:45 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.491 [2024-11-27 14:21:45.308555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.491 [2024-11-27 14:21:45.308630] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.491 [2024-11-27 14:21:45.308657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:14.491 [2024-11-27 14:21:45.308667] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.491 [2024-11-27 14:21:45.310885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.491 [2024-11-27 14:21:45.310929] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.491 pt2 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.491 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.492 [2024-11-27 14:21:45.320563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:14.492 [2024-11-27 14:21:45.322437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.492 [2024-11-27 14:21:45.322638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:14.492 [2024-11-27 14:21:45.322653] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:14.492 [2024-11-27 14:21:45.322748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:14.492 [2024-11-27 14:21:45.322878] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:14.492 [2024-11-27 14:21:45.322899] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:14.492 [2024-11-27 14:21:45.323032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.492 14:21:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.492 "name": "raid_bdev1", 00:22:14.492 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:14.492 "strip_size_kb": 0, 00:22:14.492 "state": "online", 00:22:14.492 "raid_level": "raid1", 00:22:14.492 "superblock": true, 00:22:14.492 "num_base_bdevs": 2, 00:22:14.492 "num_base_bdevs_discovered": 2, 00:22:14.492 "num_base_bdevs_operational": 2, 00:22:14.492 "base_bdevs_list": [ 00:22:14.492 { 00:22:14.492 "name": "pt1", 00:22:14.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.492 "is_configured": true, 00:22:14.492 "data_offset": 256, 00:22:14.492 "data_size": 7936 00:22:14.492 }, 00:22:14.492 { 00:22:14.492 "name": "pt2", 00:22:14.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.492 "is_configured": true, 00:22:14.492 "data_offset": 256, 00:22:14.492 "data_size": 7936 00:22:14.492 } 00:22:14.492 ] 00:22:14.492 }' 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.492 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.062 [2024-11-27 14:21:45.807969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:15.062 "name": "raid_bdev1", 00:22:15.062 "aliases": [ 00:22:15.062 "7582158e-3350-41b7-914c-142bd4cfa04c" 00:22:15.062 ], 00:22:15.062 "product_name": "Raid Volume", 00:22:15.062 "block_size": 4096, 00:22:15.062 "num_blocks": 7936, 00:22:15.062 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:15.062 "md_size": 32, 00:22:15.062 "md_interleave": false, 00:22:15.062 "dif_type": 0, 00:22:15.062 "assigned_rate_limits": { 00:22:15.062 "rw_ios_per_sec": 0, 00:22:15.062 "rw_mbytes_per_sec": 0, 00:22:15.062 "r_mbytes_per_sec": 0, 00:22:15.062 "w_mbytes_per_sec": 0 00:22:15.062 }, 00:22:15.062 "claimed": false, 00:22:15.062 "zoned": false, 
00:22:15.062 "supported_io_types": { 00:22:15.062 "read": true, 00:22:15.062 "write": true, 00:22:15.062 "unmap": false, 00:22:15.062 "flush": false, 00:22:15.062 "reset": true, 00:22:15.062 "nvme_admin": false, 00:22:15.062 "nvme_io": false, 00:22:15.062 "nvme_io_md": false, 00:22:15.062 "write_zeroes": true, 00:22:15.062 "zcopy": false, 00:22:15.062 "get_zone_info": false, 00:22:15.062 "zone_management": false, 00:22:15.062 "zone_append": false, 00:22:15.062 "compare": false, 00:22:15.062 "compare_and_write": false, 00:22:15.062 "abort": false, 00:22:15.062 "seek_hole": false, 00:22:15.062 "seek_data": false, 00:22:15.062 "copy": false, 00:22:15.062 "nvme_iov_md": false 00:22:15.062 }, 00:22:15.062 "memory_domains": [ 00:22:15.062 { 00:22:15.062 "dma_device_id": "system", 00:22:15.062 "dma_device_type": 1 00:22:15.062 }, 00:22:15.062 { 00:22:15.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.062 "dma_device_type": 2 00:22:15.062 }, 00:22:15.062 { 00:22:15.062 "dma_device_id": "system", 00:22:15.062 "dma_device_type": 1 00:22:15.062 }, 00:22:15.062 { 00:22:15.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.062 "dma_device_type": 2 00:22:15.062 } 00:22:15.062 ], 00:22:15.062 "driver_specific": { 00:22:15.062 "raid": { 00:22:15.062 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:15.062 "strip_size_kb": 0, 00:22:15.062 "state": "online", 00:22:15.062 "raid_level": "raid1", 00:22:15.062 "superblock": true, 00:22:15.062 "num_base_bdevs": 2, 00:22:15.062 "num_base_bdevs_discovered": 2, 00:22:15.062 "num_base_bdevs_operational": 2, 00:22:15.062 "base_bdevs_list": [ 00:22:15.062 { 00:22:15.062 "name": "pt1", 00:22:15.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.062 "is_configured": true, 00:22:15.062 "data_offset": 256, 00:22:15.062 "data_size": 7936 00:22:15.062 }, 00:22:15.062 { 00:22:15.062 "name": "pt2", 00:22:15.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.062 "is_configured": true, 00:22:15.062 "data_offset": 256, 
00:22:15.062 "data_size": 7936 00:22:15.062 } 00:22:15.062 ] 00:22:15.062 } 00:22:15.062 } 00:22:15.062 }' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:15.062 pt2' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:15.062 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.063 14:21:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.323 [2024-11-27 14:21:46.055540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7582158e-3350-41b7-914c-142bd4cfa04c 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 7582158e-3350-41b7-914c-142bd4cfa04c ']' 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:15.323 14:21:46 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.323 [2024-11-27 14:21:46.083192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.323 [2024-11-27 14:21:46.083219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.323 [2024-11-27 14:21:46.083306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.323 [2024-11-27 14:21:46.083364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.323 [2024-11-27 14:21:46.083376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:15.323 14:21:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 [2024-11-27 14:21:46.202999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:15.324 [2024-11-27 14:21:46.205004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:15.324 [2024-11-27 14:21:46.205093] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:15.324 [2024-11-27 14:21:46.205174] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:15.324 [2024-11-27 14:21:46.205190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.324 [2024-11-27 14:21:46.205201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:22:15.324 request: 00:22:15.324 { 00:22:15.324 "name": "raid_bdev1", 00:22:15.324 "raid_level": "raid1", 00:22:15.324 "base_bdevs": [ 00:22:15.324 "malloc1", 00:22:15.324 "malloc2" 00:22:15.324 ], 00:22:15.324 "superblock": false, 00:22:15.324 "method": "bdev_raid_create", 00:22:15.324 "req_id": 1 00:22:15.324 } 00:22:15.324 Got JSON-RPC error response 00:22:15.324 response: 00:22:15.324 { 00:22:15.324 "code": -17, 00:22:15.324 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:15.324 } 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.324 [2024-11-27 14:21:46.266907] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:15.324 [2024-11-27 14:21:46.266983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.324 [2024-11-27 14:21:46.267002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:15.324 [2024-11-27 14:21:46.267013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.324 [2024-11-27 14:21:46.269035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.324 [2024-11-27 14:21:46.269081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:15.324 [2024-11-27 14:21:46.269153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:15.324 [2024-11-27 14:21:46.269222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:15.324 pt1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.324 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.584 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.584 "name": "raid_bdev1", 00:22:15.584 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:15.584 "strip_size_kb": 0, 00:22:15.584 "state": "configuring", 00:22:15.584 "raid_level": "raid1", 00:22:15.584 "superblock": true, 00:22:15.584 "num_base_bdevs": 2, 00:22:15.584 "num_base_bdevs_discovered": 1, 00:22:15.584 "num_base_bdevs_operational": 2, 00:22:15.584 "base_bdevs_list": [ 00:22:15.584 { 00:22:15.584 "name": "pt1", 00:22:15.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.584 "is_configured": true, 00:22:15.584 "data_offset": 256, 00:22:15.584 "data_size": 7936 00:22:15.584 }, 00:22:15.584 { 
00:22:15.584 "name": null, 00:22:15.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.585 "is_configured": false, 00:22:15.585 "data_offset": 256, 00:22:15.585 "data_size": 7936 00:22:15.585 } 00:22:15.585 ] 00:22:15.585 }' 00:22:15.585 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.585 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.844 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:15.844 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.845 [2024-11-27 14:21:46.710106] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:15.845 [2024-11-27 14:21:46.710195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.845 [2024-11-27 14:21:46.710216] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:15.845 [2024-11-27 14:21:46.710228] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.845 [2024-11-27 14:21:46.710461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.845 [2024-11-27 14:21:46.710480] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.845 [2024-11-27 14:21:46.710533] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:15.845 [2024-11-27 14:21:46.710558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.845 [2024-11-27 14:21:46.710675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:15.845 [2024-11-27 14:21:46.710693] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:15.845 [2024-11-27 14:21:46.710770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:15.845 [2024-11-27 14:21:46.710894] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:15.845 [2024-11-27 14:21:46.710910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:15.845 [2024-11-27 14:21:46.711008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.845 pt2 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.845 14:21:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.845 "name": "raid_bdev1", 00:22:15.845 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:15.845 "strip_size_kb": 0, 00:22:15.845 "state": "online", 00:22:15.845 "raid_level": "raid1", 00:22:15.845 "superblock": true, 00:22:15.845 "num_base_bdevs": 2, 00:22:15.845 "num_base_bdevs_discovered": 2, 00:22:15.845 "num_base_bdevs_operational": 2, 00:22:15.845 "base_bdevs_list": [ 00:22:15.845 { 00:22:15.845 "name": "pt1", 00:22:15.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.845 "is_configured": true, 00:22:15.845 "data_offset": 256, 00:22:15.845 "data_size": 7936 00:22:15.845 }, 00:22:15.845 { 00:22:15.845 "name": "pt2", 00:22:15.845 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:15.845 "is_configured": true, 00:22:15.845 "data_offset": 256, 00:22:15.845 "data_size": 7936 00:22:15.845 } 00:22:15.845 ] 00:22:15.845 }' 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.845 14:21:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:16.415 [2024-11-27 14:21:47.137641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.415 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:16.415 "name": "raid_bdev1", 00:22:16.415 
"aliases": [ 00:22:16.415 "7582158e-3350-41b7-914c-142bd4cfa04c" 00:22:16.415 ], 00:22:16.415 "product_name": "Raid Volume", 00:22:16.415 "block_size": 4096, 00:22:16.415 "num_blocks": 7936, 00:22:16.415 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:16.415 "md_size": 32, 00:22:16.415 "md_interleave": false, 00:22:16.415 "dif_type": 0, 00:22:16.416 "assigned_rate_limits": { 00:22:16.416 "rw_ios_per_sec": 0, 00:22:16.416 "rw_mbytes_per_sec": 0, 00:22:16.416 "r_mbytes_per_sec": 0, 00:22:16.416 "w_mbytes_per_sec": 0 00:22:16.416 }, 00:22:16.416 "claimed": false, 00:22:16.416 "zoned": false, 00:22:16.416 "supported_io_types": { 00:22:16.416 "read": true, 00:22:16.416 "write": true, 00:22:16.416 "unmap": false, 00:22:16.416 "flush": false, 00:22:16.416 "reset": true, 00:22:16.416 "nvme_admin": false, 00:22:16.416 "nvme_io": false, 00:22:16.416 "nvme_io_md": false, 00:22:16.416 "write_zeroes": true, 00:22:16.416 "zcopy": false, 00:22:16.416 "get_zone_info": false, 00:22:16.416 "zone_management": false, 00:22:16.416 "zone_append": false, 00:22:16.416 "compare": false, 00:22:16.416 "compare_and_write": false, 00:22:16.416 "abort": false, 00:22:16.416 "seek_hole": false, 00:22:16.416 "seek_data": false, 00:22:16.416 "copy": false, 00:22:16.416 "nvme_iov_md": false 00:22:16.416 }, 00:22:16.416 "memory_domains": [ 00:22:16.416 { 00:22:16.416 "dma_device_id": "system", 00:22:16.416 "dma_device_type": 1 00:22:16.416 }, 00:22:16.416 { 00:22:16.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.416 "dma_device_type": 2 00:22:16.416 }, 00:22:16.416 { 00:22:16.416 "dma_device_id": "system", 00:22:16.416 "dma_device_type": 1 00:22:16.416 }, 00:22:16.416 { 00:22:16.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.416 "dma_device_type": 2 00:22:16.416 } 00:22:16.416 ], 00:22:16.416 "driver_specific": { 00:22:16.416 "raid": { 00:22:16.416 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:16.416 "strip_size_kb": 0, 00:22:16.416 "state": "online", 00:22:16.416 
"raid_level": "raid1", 00:22:16.416 "superblock": true, 00:22:16.416 "num_base_bdevs": 2, 00:22:16.416 "num_base_bdevs_discovered": 2, 00:22:16.416 "num_base_bdevs_operational": 2, 00:22:16.416 "base_bdevs_list": [ 00:22:16.416 { 00:22:16.416 "name": "pt1", 00:22:16.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:16.416 "is_configured": true, 00:22:16.416 "data_offset": 256, 00:22:16.416 "data_size": 7936 00:22:16.416 }, 00:22:16.416 { 00:22:16.416 "name": "pt2", 00:22:16.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.416 "is_configured": true, 00:22:16.416 "data_offset": 256, 00:22:16.416 "data_size": 7936 00:22:16.416 } 00:22:16.416 ] 00:22:16.416 } 00:22:16.416 } 00:22:16.416 }' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:16.416 pt2' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:16.416 14:21:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:16.416 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:16.676 [2024-11-27 14:21:47.389256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 7582158e-3350-41b7-914c-142bd4cfa04c '!=' 7582158e-3350-41b7-914c-142bd4cfa04c ']' 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.676 [2024-11-27 14:21:47.416971] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:16.676 
14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.676 "name": "raid_bdev1", 00:22:16.676 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:16.676 "strip_size_kb": 0, 00:22:16.676 "state": "online", 00:22:16.676 "raid_level": "raid1", 00:22:16.676 "superblock": true, 00:22:16.676 "num_base_bdevs": 2, 00:22:16.676 "num_base_bdevs_discovered": 1, 00:22:16.676 "num_base_bdevs_operational": 1, 00:22:16.676 "base_bdevs_list": [ 00:22:16.676 { 00:22:16.676 "name": null, 00:22:16.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.676 "is_configured": false, 00:22:16.676 "data_offset": 0, 00:22:16.676 "data_size": 7936 00:22:16.676 }, 00:22:16.676 { 00:22:16.676 "name": "pt2", 00:22:16.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.676 "is_configured": true, 00:22:16.676 "data_offset": 256, 00:22:16.676 "data_size": 7936 00:22:16.676 } 
00:22:16.676 ] 00:22:16.676 }' 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.676 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.935 [2024-11-27 14:21:47.860165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.935 [2024-11-27 14:21:47.860198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.935 [2024-11-27 14:21:47.860279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.935 [2024-11-27 14:21:47.860329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.935 [2024-11-27 14:21:47.860341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:16.935 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.195 14:21:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.195 [2024-11-27 14:21:47.924060] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:17.195 [2024-11-27 
14:21:47.924145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.195 [2024-11-27 14:21:47.924163] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:17.195 [2024-11-27 14:21:47.924175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.195 [2024-11-27 14:21:47.926178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.195 [2024-11-27 14:21:47.926228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:17.195 [2024-11-27 14:21:47.926283] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:17.195 [2024-11-27 14:21:47.926329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.195 [2024-11-27 14:21:47.926424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:17.195 [2024-11-27 14:21:47.926443] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:17.195 [2024-11-27 14:21:47.926521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:17.195 [2024-11-27 14:21:47.926645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:17.195 [2024-11-27 14:21:47.926659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:17.195 [2024-11-27 14:21:47.926765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.195 pt2 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.195 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.196 "name": "raid_bdev1", 00:22:17.196 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:17.196 "strip_size_kb": 0, 00:22:17.196 "state": "online", 00:22:17.196 "raid_level": "raid1", 00:22:17.196 "superblock": true, 00:22:17.196 "num_base_bdevs": 2, 00:22:17.196 
"num_base_bdevs_discovered": 1, 00:22:17.196 "num_base_bdevs_operational": 1, 00:22:17.196 "base_bdevs_list": [ 00:22:17.196 { 00:22:17.196 "name": null, 00:22:17.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.196 "is_configured": false, 00:22:17.196 "data_offset": 256, 00:22:17.196 "data_size": 7936 00:22:17.196 }, 00:22:17.196 { 00:22:17.196 "name": "pt2", 00:22:17.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.196 "is_configured": true, 00:22:17.196 "data_offset": 256, 00:22:17.196 "data_size": 7936 00:22:17.196 } 00:22:17.196 ] 00:22:17.196 }' 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.196 14:21:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.455 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:17.455 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.455 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.455 [2024-11-27 14:21:48.343382] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.456 [2024-11-27 14:21:48.343422] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.456 [2024-11-27 14:21:48.343509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.456 [2024-11-27 14:21:48.343567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.456 [2024-11-27 14:21:48.343577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 14:21:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.456 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.456 [2024-11-27 14:21:48.403314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.456 [2024-11-27 14:21:48.403383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.456 [2024-11-27 14:21:48.403404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:17.456 [2024-11-27 14:21:48.403413] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.456 [2024-11-27 14:21:48.405591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.456 [2024-11-27 14:21:48.405633] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:22:17.456 [2024-11-27 14:21:48.405719] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:17.456 [2024-11-27 14:21:48.405768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.456 [2024-11-27 14:21:48.405922] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:17.456 [2024-11-27 14:21:48.405942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.456 [2024-11-27 14:21:48.405963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:17.456 [2024-11-27 14:21:48.406058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.456 [2024-11-27 14:21:48.406169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:17.456 [2024-11-27 14:21:48.406184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:17.456 [2024-11-27 14:21:48.406253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:17.456 [2024-11-27 14:21:48.406368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:17.456 [2024-11-27 14:21:48.406387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:17.456 [2024-11-27 14:21:48.406514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.715 pt1 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.715 "name": "raid_bdev1", 00:22:17.715 "uuid": "7582158e-3350-41b7-914c-142bd4cfa04c", 00:22:17.715 "strip_size_kb": 0, 00:22:17.715 "state": "online", 00:22:17.715 "raid_level": "raid1", 
00:22:17.715 "superblock": true, 00:22:17.715 "num_base_bdevs": 2, 00:22:17.715 "num_base_bdevs_discovered": 1, 00:22:17.715 "num_base_bdevs_operational": 1, 00:22:17.715 "base_bdevs_list": [ 00:22:17.715 { 00:22:17.715 "name": null, 00:22:17.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.715 "is_configured": false, 00:22:17.715 "data_offset": 256, 00:22:17.715 "data_size": 7936 00:22:17.715 }, 00:22:17.715 { 00:22:17.715 "name": "pt2", 00:22:17.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.715 "is_configured": true, 00:22:17.715 "data_offset": 256, 00:22:17.715 "data_size": 7936 00:22:17.715 } 00:22:17.715 ] 00:22:17.715 }' 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.715 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:17.973 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.973 
14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.973 [2024-11-27 14:21:48.918722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 7582158e-3350-41b7-914c-142bd4cfa04c '!=' 7582158e-3350-41b7-914c-142bd4cfa04c ']' 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87702 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87702 ']' 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87702 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87702 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87702' 00:22:18.233 killing process with pid 87702 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87702 00:22:18.233 [2024-11-27 14:21:48.996669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:18.233 [2024-11-27 14:21:48.996855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:22:18.233 14:21:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87702 00:22:18.233 [2024-11-27 14:21:48.996942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.233 [2024-11-27 14:21:48.997002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:18.491 [2024-11-27 14:21:49.225819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.427 14:21:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:19.427 00:22:19.427 real 0m6.130s 00:22:19.427 user 0m9.234s 00:22:19.427 sys 0m1.104s 00:22:19.427 14:21:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.427 ************************************ 00:22:19.427 END TEST raid_superblock_test_md_separate 00:22:19.427 ************************************ 00:22:19.427 14:21:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.694 14:21:50 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:22:19.694 14:21:50 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:19.694 14:21:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:19.694 14:21:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.694 14:21:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.694 ************************************ 00:22:19.694 START TEST raid_rebuild_test_sb_md_separate 00:22:19.694 ************************************ 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88027 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88027 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88027 ']' 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.694 14:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.694 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.694 Zero copy mechanism will not be used. 00:22:19.694 [2024-11-27 14:21:50.534986] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:19.694 [2024-11-27 14:21:50.535109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88027 ] 00:22:19.969 [2024-11-27 14:21:50.708268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.969 [2024-11-27 14:21:50.829800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.228 [2024-11-27 14:21:51.035449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.228 [2024-11-27 14:21:51.035588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.488 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.488 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:20.488 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.488 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:20.488 14:21:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.488 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.747 BaseBdev1_malloc 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.747 [2024-11-27 14:21:51.475504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.747 [2024-11-27 14:21:51.475640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.747 [2024-11-27 14:21:51.475669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:20.747 [2024-11-27 14:21:51.475680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.747 [2024-11-27 14:21:51.477798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.747 [2024-11-27 14:21:51.477840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.747 BaseBdev1 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.747 BaseBdev2_malloc 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.747 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.747 [2024-11-27 14:21:51.533179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.748 [2024-11-27 14:21:51.533311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.748 [2024-11-27 14:21:51.533337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:20.748 [2024-11-27 14:21:51.533351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.748 [2024-11-27 14:21:51.535337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.748 [2024-11-27 14:21:51.535378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.748 BaseBdev2 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 spare_malloc 00:22:20.748 14:21:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 spare_delay 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 [2024-11-27 14:21:51.612147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.748 [2024-11-27 14:21:51.612244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.748 [2024-11-27 14:21:51.612273] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:20.748 [2024-11-27 14:21:51.612285] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.748 [2024-11-27 14:21:51.614363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.748 [2024-11-27 14:21:51.614403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.748 spare 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 [2024-11-27 14:21:51.624195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.748 [2024-11-27 14:21:51.626096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.748 [2024-11-27 14:21:51.626309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:20.748 [2024-11-27 14:21:51.626326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:20.748 [2024-11-27 14:21:51.626430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:20.748 [2024-11-27 14:21:51.626558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:20.748 [2024-11-27 14:21:51.626569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:20.748 [2024-11-27 14:21:51.626700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.748 14:21:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.748 "name": "raid_bdev1", 00:22:20.748 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:20.748 "strip_size_kb": 0, 00:22:20.748 "state": "online", 00:22:20.748 "raid_level": "raid1", 00:22:20.748 "superblock": true, 00:22:20.748 "num_base_bdevs": 2, 00:22:20.748 "num_base_bdevs_discovered": 2, 00:22:20.748 "num_base_bdevs_operational": 2, 00:22:20.748 "base_bdevs_list": [ 00:22:20.748 { 00:22:20.748 "name": "BaseBdev1", 00:22:20.748 "uuid": "cb1caebc-f5aa-5335-be55-a30b99dcd28a", 00:22:20.748 "is_configured": true, 00:22:20.748 "data_offset": 256, 00:22:20.748 
"data_size": 7936 00:22:20.748 }, 00:22:20.748 { 00:22:20.748 "name": "BaseBdev2", 00:22:20.748 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:20.748 "is_configured": true, 00:22:20.748 "data_offset": 256, 00:22:20.748 "data_size": 7936 00:22:20.748 } 00:22:20.748 ] 00:22:20.748 }' 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.748 14:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.316 [2024-11-27 14:21:52.087673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.316 14:21:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.316 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.317 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:21.575 [2024-11-27 14:21:52.386933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:21.575 /dev/nbd0 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:21.575 14:21:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.575 1+0 records in 00:22:21.575 1+0 records out 00:22:21.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053415 s, 7.7 MB/s 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@893 -- # return 0 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:21.575 14:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:22.516 7936+0 records in 00:22:22.516 7936+0 records out 00:22:22.516 32505856 bytes (33 MB, 31 MiB) copied, 0.674907 s, 48.2 MB/s 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.516 [2024-11-27 14:21:53.347025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.516 [2024-11-27 14:21:53.363126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.516 "name": "raid_bdev1", 00:22:22.516 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:22.516 "strip_size_kb": 0, 00:22:22.516 "state": "online", 00:22:22.516 "raid_level": "raid1", 00:22:22.516 "superblock": true, 00:22:22.516 "num_base_bdevs": 2, 00:22:22.516 "num_base_bdevs_discovered": 1, 00:22:22.516 "num_base_bdevs_operational": 1, 00:22:22.516 "base_bdevs_list": [ 00:22:22.516 { 00:22:22.516 "name": null, 00:22:22.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.516 "is_configured": false, 00:22:22.516 "data_offset": 0, 00:22:22.516 "data_size": 7936 00:22:22.516 }, 00:22:22.516 { 00:22:22.516 "name": "BaseBdev2", 00:22:22.516 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:22.516 "is_configured": 
true, 00:22:22.516 "data_offset": 256, 00:22:22.516 "data_size": 7936 00:22:22.516 } 00:22:22.516 ] 00:22:22.516 }' 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.516 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.086 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.086 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.086 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.086 [2024-11-27 14:21:53.830321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.086 [2024-11-27 14:21:53.845230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:23.086 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.086 14:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:23.086 [2024-11-27 14:21:53.847239] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.028 "name": "raid_bdev1", 00:22:24.028 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:24.028 "strip_size_kb": 0, 00:22:24.028 "state": "online", 00:22:24.028 "raid_level": "raid1", 00:22:24.028 "superblock": true, 00:22:24.028 "num_base_bdevs": 2, 00:22:24.028 "num_base_bdevs_discovered": 2, 00:22:24.028 "num_base_bdevs_operational": 2, 00:22:24.028 "process": { 00:22:24.028 "type": "rebuild", 00:22:24.028 "target": "spare", 00:22:24.028 "progress": { 00:22:24.028 "blocks": 2560, 00:22:24.028 "percent": 32 00:22:24.028 } 00:22:24.028 }, 00:22:24.028 "base_bdevs_list": [ 00:22:24.028 { 00:22:24.028 "name": "spare", 00:22:24.028 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:24.028 "is_configured": true, 00:22:24.028 "data_offset": 256, 00:22:24.028 "data_size": 7936 00:22:24.028 }, 00:22:24.028 { 00:22:24.028 "name": "BaseBdev2", 00:22:24.028 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:24.028 "is_configured": true, 00:22:24.028 "data_offset": 256, 00:22:24.028 "data_size": 7936 00:22:24.028 } 00:22:24.028 ] 00:22:24.028 }' 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.028 
14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.028 14:21:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.289 [2024-11-27 14:21:54.983067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.289 [2024-11-27 14:21:55.053392] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.289 [2024-11-27 14:21:55.053490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.289 [2024-11-27 14:21:55.053506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.289 [2024-11-27 14:21:55.053519] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.289 14:21:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.289 "name": "raid_bdev1", 00:22:24.289 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:24.289 "strip_size_kb": 0, 00:22:24.289 "state": "online", 00:22:24.289 "raid_level": "raid1", 00:22:24.289 "superblock": true, 00:22:24.289 "num_base_bdevs": 2, 00:22:24.289 "num_base_bdevs_discovered": 1, 00:22:24.289 "num_base_bdevs_operational": 1, 00:22:24.289 "base_bdevs_list": [ 00:22:24.289 { 00:22:24.289 "name": null, 00:22:24.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.289 "is_configured": false, 00:22:24.289 "data_offset": 0, 00:22:24.289 "data_size": 7936 00:22:24.289 }, 00:22:24.289 { 00:22:24.289 "name": "BaseBdev2", 00:22:24.289 "uuid": 
"c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:24.289 "is_configured": true, 00:22:24.289 "data_offset": 256, 00:22:24.289 "data_size": 7936 00:22:24.289 } 00:22:24.289 ] 00:22:24.289 }' 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.289 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.549 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.809 "name": "raid_bdev1", 00:22:24.809 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:24.809 "strip_size_kb": 0, 00:22:24.809 "state": "online", 00:22:24.809 "raid_level": "raid1", 00:22:24.809 "superblock": true, 00:22:24.809 
"num_base_bdevs": 2, 00:22:24.809 "num_base_bdevs_discovered": 1, 00:22:24.809 "num_base_bdevs_operational": 1, 00:22:24.809 "base_bdevs_list": [ 00:22:24.809 { 00:22:24.809 "name": null, 00:22:24.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.809 "is_configured": false, 00:22:24.809 "data_offset": 0, 00:22:24.809 "data_size": 7936 00:22:24.809 }, 00:22:24.809 { 00:22:24.809 "name": "BaseBdev2", 00:22:24.809 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:24.809 "is_configured": true, 00:22:24.809 "data_offset": 256, 00:22:24.809 "data_size": 7936 00:22:24.809 } 00:22:24.809 ] 00:22:24.809 }' 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.809 [2024-11-27 14:21:55.637337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.809 [2024-11-27 14:21:55.654309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.809 14:21:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:24.809 [2024-11-27 14:21:55.656520] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.745 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.004 "name": "raid_bdev1", 00:22:26.004 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:26.004 "strip_size_kb": 0, 00:22:26.004 "state": "online", 00:22:26.004 "raid_level": "raid1", 00:22:26.004 "superblock": true, 00:22:26.004 "num_base_bdevs": 2, 00:22:26.004 "num_base_bdevs_discovered": 2, 00:22:26.004 "num_base_bdevs_operational": 2, 00:22:26.004 "process": { 00:22:26.004 "type": "rebuild", 00:22:26.004 "target": "spare", 00:22:26.004 "progress": { 00:22:26.004 "blocks": 2560, 00:22:26.004 "percent": 32 00:22:26.004 } 00:22:26.004 
}, 00:22:26.004 "base_bdevs_list": [ 00:22:26.004 { 00:22:26.004 "name": "spare", 00:22:26.004 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:26.004 "is_configured": true, 00:22:26.004 "data_offset": 256, 00:22:26.004 "data_size": 7936 00:22:26.004 }, 00:22:26.004 { 00:22:26.004 "name": "BaseBdev2", 00:22:26.004 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:26.004 "is_configured": true, 00:22:26.004 "data_offset": 256, 00:22:26.004 "data_size": 7936 00:22:26.004 } 00:22:26.004 ] 00:22:26.004 }' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:26.004 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=724 00:22:26.004 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:26.005 14:21:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.005 "name": "raid_bdev1", 00:22:26.005 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:26.005 "strip_size_kb": 0, 00:22:26.005 "state": "online", 00:22:26.005 "raid_level": "raid1", 00:22:26.005 "superblock": true, 00:22:26.005 "num_base_bdevs": 2, 00:22:26.005 "num_base_bdevs_discovered": 2, 00:22:26.005 "num_base_bdevs_operational": 2, 00:22:26.005 "process": { 00:22:26.005 "type": "rebuild", 00:22:26.005 "target": "spare", 00:22:26.005 "progress": { 00:22:26.005 "blocks": 2816, 00:22:26.005 "percent": 35 00:22:26.005 } 00:22:26.005 }, 00:22:26.005 "base_bdevs_list": [ 00:22:26.005 { 00:22:26.005 "name": "spare", 00:22:26.005 "uuid": 
"cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:26.005 "is_configured": true, 00:22:26.005 "data_offset": 256, 00:22:26.005 "data_size": 7936 00:22:26.005 }, 00:22:26.005 { 00:22:26.005 "name": "BaseBdev2", 00:22:26.005 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:26.005 "is_configured": true, 00:22:26.005 "data_offset": 256, 00:22:26.005 "data_size": 7936 00:22:26.005 } 00:22:26.005 ] 00:22:26.005 }' 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.005 14:21:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.439 14:21:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.439 14:21:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.439 "name": "raid_bdev1", 00:22:27.439 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:27.439 "strip_size_kb": 0, 00:22:27.439 "state": "online", 00:22:27.439 "raid_level": "raid1", 00:22:27.439 "superblock": true, 00:22:27.439 "num_base_bdevs": 2, 00:22:27.439 "num_base_bdevs_discovered": 2, 00:22:27.439 "num_base_bdevs_operational": 2, 00:22:27.439 "process": { 00:22:27.439 "type": "rebuild", 00:22:27.439 "target": "spare", 00:22:27.439 "progress": { 00:22:27.439 "blocks": 5888, 00:22:27.439 "percent": 74 00:22:27.439 } 00:22:27.439 }, 00:22:27.439 "base_bdevs_list": [ 00:22:27.439 { 00:22:27.439 "name": "spare", 00:22:27.439 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:27.439 "is_configured": true, 00:22:27.439 "data_offset": 256, 00:22:27.439 "data_size": 7936 00:22:27.439 }, 00:22:27.439 { 00:22:27.439 "name": "BaseBdev2", 00:22:27.439 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:27.439 "is_configured": true, 00:22:27.439 "data_offset": 256, 00:22:27.439 "data_size": 7936 00:22:27.439 } 00:22:27.439 ] 00:22:27.439 }' 00:22:27.439 14:21:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.439 14:21:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.439 14:21:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.439 14:21:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.439 14:21:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:28.047 [2024-11-27 14:21:58.771099] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:28.047 [2024-11-27 14:21:58.771186] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:28.047 [2024-11-27 14:21:58.771305] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.312 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:28.312 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.313 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.314 14:21:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.314 "name": "raid_bdev1", 00:22:28.314 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:28.314 "strip_size_kb": 0, 00:22:28.314 "state": "online", 00:22:28.314 "raid_level": "raid1", 00:22:28.314 "superblock": true, 00:22:28.314 "num_base_bdevs": 2, 00:22:28.314 "num_base_bdevs_discovered": 2, 00:22:28.314 "num_base_bdevs_operational": 2, 00:22:28.314 "base_bdevs_list": [ 00:22:28.314 { 00:22:28.314 "name": "spare", 00:22:28.314 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:28.314 "is_configured": true, 00:22:28.314 "data_offset": 256, 00:22:28.314 "data_size": 7936 00:22:28.314 }, 00:22:28.314 { 00:22:28.314 "name": "BaseBdev2", 00:22:28.314 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:28.314 "is_configured": true, 00:22:28.314 "data_offset": 256, 00:22:28.314 "data_size": 7936 00:22:28.314 } 00:22:28.314 ] 00:22:28.314 }' 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.314 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:28.314 14:21:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:28.315 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.315 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.315 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.315 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.315 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.574 "name": "raid_bdev1", 00:22:28.574 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:28.574 "strip_size_kb": 0, 00:22:28.574 "state": "online", 00:22:28.574 "raid_level": "raid1", 00:22:28.574 "superblock": true, 00:22:28.574 "num_base_bdevs": 2, 00:22:28.574 "num_base_bdevs_discovered": 2, 00:22:28.574 "num_base_bdevs_operational": 2, 00:22:28.574 "base_bdevs_list": [ 00:22:28.574 { 00:22:28.574 "name": "spare", 00:22:28.574 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:28.574 "is_configured": true, 00:22:28.574 "data_offset": 256, 00:22:28.574 "data_size": 7936 00:22:28.574 }, 00:22:28.574 { 00:22:28.574 "name": "BaseBdev2", 00:22:28.574 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:28.574 "is_configured": true, 00:22:28.574 "data_offset": 256, 00:22:28.574 "data_size": 7936 00:22:28.574 } 00:22:28.574 ] 00:22:28.574 }' 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.574 "name": "raid_bdev1", 00:22:28.574 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:28.574 "strip_size_kb": 0, 00:22:28.574 "state": "online", 00:22:28.574 "raid_level": "raid1", 00:22:28.574 "superblock": true, 00:22:28.574 "num_base_bdevs": 2, 00:22:28.574 "num_base_bdevs_discovered": 2, 00:22:28.574 "num_base_bdevs_operational": 2, 00:22:28.574 "base_bdevs_list": [ 00:22:28.574 { 00:22:28.574 "name": "spare", 00:22:28.574 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:28.574 "is_configured": true, 00:22:28.574 "data_offset": 256, 00:22:28.574 "data_size": 7936 00:22:28.574 }, 00:22:28.574 { 00:22:28.574 "name": "BaseBdev2", 00:22:28.574 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:28.574 "is_configured": true, 00:22:28.574 "data_offset": 256, 00:22:28.574 "data_size": 7936 00:22:28.574 } 00:22:28.574 ] 00:22:28.574 }' 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.574 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.142 [2024-11-27 14:21:59.861587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.142 [2024-11-27 14:21:59.861671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.142 [2024-11-27 14:21:59.861784] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.142 [2024-11-27 14:21:59.861865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.142 [2024-11-27 14:21:59.861901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.142 14:21:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:29.401 /dev/nbd0 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.401 1+0 records in 00:22:29.401 1+0 records out 00:22:29.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000661644 s, 6.2 MB/s 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.401 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:29.662 /dev/nbd1 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.662 1+0 records in 00:22:29.662 1+0 records out 00:22:29.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373578 s, 11.0 MB/s 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:29.662 14:22:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.662 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:29.921 14:22:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:30.180 14:22:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.180 [2024-11-27 14:22:01.043799] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:30.180 [2024-11-27 14:22:01.043859] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.180 [2024-11-27 14:22:01.043890] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:30.180 [2024-11-27 14:22:01.043916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.180 [2024-11-27 14:22:01.045926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.180 [2024-11-27 14:22:01.045968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:30.180 [2024-11-27 14:22:01.046033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:30.180 [2024-11-27 14:22:01.046088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.180 [2024-11-27 14:22:01.046277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.180 spare 00:22:30.180 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.181 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:30.181 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.181 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.439 [2024-11-27 14:22:01.146237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:30.439 [2024-11-27 14:22:01.146282] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:30.439 [2024-11-27 14:22:01.146428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:30.439 [2024-11-27 14:22:01.146599] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:30.439 [2024-11-27 14:22:01.146608] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:30.439 [2024-11-27 14:22:01.146751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.439 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.440 14:22:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.440 "name": "raid_bdev1", 00:22:30.440 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:30.440 "strip_size_kb": 0, 00:22:30.440 "state": "online", 00:22:30.440 "raid_level": "raid1", 00:22:30.440 "superblock": true, 00:22:30.440 "num_base_bdevs": 2, 00:22:30.440 "num_base_bdevs_discovered": 2, 00:22:30.440 "num_base_bdevs_operational": 2, 00:22:30.440 "base_bdevs_list": [ 00:22:30.440 { 00:22:30.440 "name": "spare", 00:22:30.440 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:30.440 "is_configured": true, 00:22:30.440 "data_offset": 256, 00:22:30.440 "data_size": 7936 00:22:30.440 }, 00:22:30.440 { 00:22:30.440 "name": "BaseBdev2", 00:22:30.440 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:30.440 "is_configured": true, 00:22:30.440 "data_offset": 256, 00:22:30.440 "data_size": 7936 00:22:30.440 } 00:22:30.440 ] 00:22:30.440 }' 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.440 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.699 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:30.958 "name": "raid_bdev1", 00:22:30.958 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:30.958 "strip_size_kb": 0, 00:22:30.958 "state": "online", 00:22:30.958 "raid_level": "raid1", 00:22:30.958 "superblock": true, 00:22:30.958 "num_base_bdevs": 2, 00:22:30.958 "num_base_bdevs_discovered": 2, 00:22:30.958 "num_base_bdevs_operational": 2, 00:22:30.958 "base_bdevs_list": [ 00:22:30.958 { 00:22:30.958 "name": "spare", 00:22:30.958 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:30.958 "is_configured": true, 00:22:30.958 "data_offset": 256, 00:22:30.958 "data_size": 7936 00:22:30.958 }, 00:22:30.958 { 00:22:30.958 "name": "BaseBdev2", 00:22:30.958 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:30.958 "is_configured": true, 00:22:30.958 "data_offset": 256, 00:22:30.958 "data_size": 7936 00:22:30.958 } 00:22:30.958 ] 00:22:30.958 }' 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:30.958 
14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.958 [2024-11-27 14:22:01.806572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:30.958 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.959 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.959 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.959 "name": "raid_bdev1", 00:22:30.959 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:30.959 "strip_size_kb": 0, 00:22:30.959 "state": "online", 00:22:30.959 "raid_level": "raid1", 00:22:30.959 "superblock": true, 00:22:30.959 "num_base_bdevs": 2, 00:22:30.959 "num_base_bdevs_discovered": 1, 00:22:30.959 "num_base_bdevs_operational": 1, 00:22:30.959 "base_bdevs_list": [ 00:22:30.959 { 00:22:30.959 "name": null, 00:22:30.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.959 "is_configured": false, 00:22:30.959 "data_offset": 0, 00:22:30.959 "data_size": 7936 00:22:30.959 }, 00:22:30.959 { 00:22:30.959 
"name": "BaseBdev2", 00:22:30.959 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:30.959 "is_configured": true, 00:22:30.959 "data_offset": 256, 00:22:30.959 "data_size": 7936 00:22:30.959 } 00:22:30.959 ] 00:22:30.959 }' 00:22:30.959 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.959 14:22:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.526 14:22:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:31.526 14:22:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.526 14:22:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.526 [2024-11-27 14:22:02.237840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.526 [2024-11-27 14:22:02.238109] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:31.526 [2024-11-27 14:22:02.238196] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:31.526 [2024-11-27 14:22:02.238268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.526 [2024-11-27 14:22:02.252517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:31.526 14:22:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.526 14:22:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:31.527 [2024-11-27 14:22:02.254461] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.464 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.465 "name": "raid_bdev1", 00:22:32.465 
"uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:32.465 "strip_size_kb": 0, 00:22:32.465 "state": "online", 00:22:32.465 "raid_level": "raid1", 00:22:32.465 "superblock": true, 00:22:32.465 "num_base_bdevs": 2, 00:22:32.465 "num_base_bdevs_discovered": 2, 00:22:32.465 "num_base_bdevs_operational": 2, 00:22:32.465 "process": { 00:22:32.465 "type": "rebuild", 00:22:32.465 "target": "spare", 00:22:32.465 "progress": { 00:22:32.465 "blocks": 2560, 00:22:32.465 "percent": 32 00:22:32.465 } 00:22:32.465 }, 00:22:32.465 "base_bdevs_list": [ 00:22:32.465 { 00:22:32.465 "name": "spare", 00:22:32.465 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:32.465 "is_configured": true, 00:22:32.465 "data_offset": 256, 00:22:32.465 "data_size": 7936 00:22:32.465 }, 00:22:32.465 { 00:22:32.465 "name": "BaseBdev2", 00:22:32.465 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:32.465 "is_configured": true, 00:22:32.465 "data_offset": 256, 00:22:32.465 "data_size": 7936 00:22:32.465 } 00:22:32.465 ] 00:22:32.465 }' 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.465 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.465 [2024-11-27 14:22:03.414985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.724 
[2024-11-27 14:22:03.460767] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:32.724 [2024-11-27 14:22:03.460865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.724 [2024-11-27 14:22:03.460883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.724 [2024-11-27 14:22:03.460907] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.724 14:22:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.724 "name": "raid_bdev1", 00:22:32.724 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:32.724 "strip_size_kb": 0, 00:22:32.724 "state": "online", 00:22:32.724 "raid_level": "raid1", 00:22:32.724 "superblock": true, 00:22:32.724 "num_base_bdevs": 2, 00:22:32.724 "num_base_bdevs_discovered": 1, 00:22:32.724 "num_base_bdevs_operational": 1, 00:22:32.724 "base_bdevs_list": [ 00:22:32.724 { 00:22:32.724 "name": null, 00:22:32.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.724 "is_configured": false, 00:22:32.724 "data_offset": 0, 00:22:32.724 "data_size": 7936 00:22:32.724 }, 00:22:32.724 { 00:22:32.724 "name": "BaseBdev2", 00:22:32.724 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:32.724 "is_configured": true, 00:22:32.724 "data_offset": 256, 00:22:32.724 "data_size": 7936 00:22:32.724 } 00:22:32.724 ] 00:22:32.724 }' 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.724 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.294 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:33.294 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.294 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.294 [2024-11-27 14:22:03.948758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:33.294 [2024-11-27 14:22:03.948917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:33.294 [2024-11-27 14:22:03.948965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:33.294 [2024-11-27 14:22:03.949002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:33.294 [2024-11-27 14:22:03.949344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:33.294 [2024-11-27 14:22:03.949406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:33.294 [2024-11-27 14:22:03.949505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:33.294 [2024-11-27 14:22:03.949548] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:33.294 [2024-11-27 14:22:03.949561] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:33.294 [2024-11-27 14:22:03.949588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:33.294 [2024-11-27 14:22:03.964087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:33.294 spare 00:22:33.294 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.294 14:22:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:33.294 [2024-11-27 14:22:03.966050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.236 14:22:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.236 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.237 "name": 
"raid_bdev1", 00:22:34.237 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:34.237 "strip_size_kb": 0, 00:22:34.237 "state": "online", 00:22:34.237 "raid_level": "raid1", 00:22:34.237 "superblock": true, 00:22:34.237 "num_base_bdevs": 2, 00:22:34.237 "num_base_bdevs_discovered": 2, 00:22:34.237 "num_base_bdevs_operational": 2, 00:22:34.237 "process": { 00:22:34.237 "type": "rebuild", 00:22:34.237 "target": "spare", 00:22:34.237 "progress": { 00:22:34.237 "blocks": 2560, 00:22:34.237 "percent": 32 00:22:34.237 } 00:22:34.237 }, 00:22:34.237 "base_bdevs_list": [ 00:22:34.237 { 00:22:34.237 "name": "spare", 00:22:34.237 "uuid": "cf81c7d1-c669-5a74-bf20-47046beffbbe", 00:22:34.237 "is_configured": true, 00:22:34.237 "data_offset": 256, 00:22:34.237 "data_size": 7936 00:22:34.237 }, 00:22:34.237 { 00:22:34.237 "name": "BaseBdev2", 00:22:34.237 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:34.237 "is_configured": true, 00:22:34.237 "data_offset": 256, 00:22:34.237 "data_size": 7936 00:22:34.237 } 00:22:34.237 ] 00:22:34.237 }' 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.237 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.237 [2024-11-27 14:22:05.082565] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:22:34.237 [2024-11-27 14:22:05.172348] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:34.237 [2024-11-27 14:22:05.172435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:34.237 [2024-11-27 14:22:05.172453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:34.237 [2024-11-27 14:22:05.172460] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:34.507 "name": "raid_bdev1", 00:22:34.507 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:34.507 "strip_size_kb": 0, 00:22:34.507 "state": "online", 00:22:34.507 "raid_level": "raid1", 00:22:34.507 "superblock": true, 00:22:34.507 "num_base_bdevs": 2, 00:22:34.507 "num_base_bdevs_discovered": 1, 00:22:34.507 "num_base_bdevs_operational": 1, 00:22:34.507 "base_bdevs_list": [ 00:22:34.507 { 00:22:34.507 "name": null, 00:22:34.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.507 "is_configured": false, 00:22:34.507 "data_offset": 0, 00:22:34.507 "data_size": 7936 00:22:34.507 }, 00:22:34.507 { 00:22:34.507 "name": "BaseBdev2", 00:22:34.507 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:34.507 "is_configured": true, 00:22:34.507 "data_offset": 256, 00:22:34.507 "data_size": 7936 00:22:34.507 } 00:22:34.507 ] 00:22:34.507 }' 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:34.507 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.767 14:22:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.767 "name": "raid_bdev1", 00:22:34.767 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:34.767 "strip_size_kb": 0, 00:22:34.767 "state": "online", 00:22:34.767 "raid_level": "raid1", 00:22:34.767 "superblock": true, 00:22:34.767 "num_base_bdevs": 2, 00:22:34.767 "num_base_bdevs_discovered": 1, 00:22:34.767 "num_base_bdevs_operational": 1, 00:22:34.767 "base_bdevs_list": [ 00:22:34.767 { 00:22:34.767 "name": null, 00:22:34.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.767 "is_configured": false, 00:22:34.767 "data_offset": 0, 00:22:34.767 "data_size": 7936 00:22:34.767 }, 00:22:34.767 { 00:22:34.767 "name": "BaseBdev2", 00:22:34.767 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:34.767 "is_configured": true, 00:22:34.767 "data_offset": 256, 00:22:34.767 "data_size": 7936 00:22:34.767 } 00:22:34.767 ] 00:22:34.767 }' 00:22:34.767 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.027 [2024-11-27 14:22:05.776335] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:35.027 [2024-11-27 14:22:05.776406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:35.027 [2024-11-27 14:22:05.776433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:35.027 [2024-11-27 14:22:05.776443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:35.027 [2024-11-27 14:22:05.776695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:35.027 [2024-11-27 14:22:05.776709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:22:35.027 [2024-11-27 14:22:05.776773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:35.027 [2024-11-27 14:22:05.776789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:35.027 [2024-11-27 14:22:05.776801] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:35.027 [2024-11-27 14:22:05.776813] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:35.027 BaseBdev1 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.027 14:22:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:35.963 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.964 "name": "raid_bdev1", 00:22:35.964 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:35.964 "strip_size_kb": 0, 00:22:35.964 "state": "online", 00:22:35.964 "raid_level": "raid1", 00:22:35.964 "superblock": true, 00:22:35.964 "num_base_bdevs": 2, 00:22:35.964 "num_base_bdevs_discovered": 1, 00:22:35.964 "num_base_bdevs_operational": 1, 00:22:35.964 "base_bdevs_list": [ 00:22:35.964 { 00:22:35.964 "name": null, 00:22:35.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.964 "is_configured": false, 00:22:35.964 "data_offset": 0, 00:22:35.964 "data_size": 7936 00:22:35.964 }, 00:22:35.964 { 00:22:35.964 "name": "BaseBdev2", 00:22:35.964 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:35.964 "is_configured": true, 00:22:35.964 "data_offset": 256, 00:22:35.964 "data_size": 7936 00:22:35.964 } 00:22:35.964 ] 00:22:35.964 }' 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.964 14:22:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:36.532 "name": "raid_bdev1", 00:22:36.532 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:36.532 "strip_size_kb": 0, 00:22:36.532 "state": "online", 00:22:36.532 "raid_level": "raid1", 00:22:36.532 "superblock": true, 00:22:36.532 "num_base_bdevs": 2, 00:22:36.532 "num_base_bdevs_discovered": 1, 00:22:36.532 "num_base_bdevs_operational": 1, 00:22:36.532 "base_bdevs_list": [ 00:22:36.532 { 00:22:36.532 "name": null, 00:22:36.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.532 "is_configured": false, 00:22:36.532 "data_offset": 0, 00:22:36.532 "data_size": 7936 00:22:36.532 }, 00:22:36.532 { 00:22:36.532 "name": "BaseBdev2", 00:22:36.532 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:36.532 "is_configured": 
true, 00:22:36.532 "data_offset": 256, 00:22:36.532 "data_size": 7936 00:22:36.532 } 00:22:36.532 ] 00:22:36.532 }' 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.532 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.532 [2024-11-27 14:22:07.337852] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:36.532 [2024-11-27 14:22:07.338121] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:36.532 [2024-11-27 14:22:07.338216] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:36.532 request: 00:22:36.532 { 00:22:36.532 "base_bdev": "BaseBdev1", 00:22:36.532 "raid_bdev": "raid_bdev1", 00:22:36.532 "method": "bdev_raid_add_base_bdev", 00:22:36.532 "req_id": 1 00:22:36.532 } 00:22:36.532 Got JSON-RPC error response 00:22:36.532 response: 00:22:36.532 { 00:22:36.532 "code": -22, 00:22:36.532 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:36.532 } 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:36.533 14:22:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.469 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.469 "name": "raid_bdev1", 00:22:37.469 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:37.469 "strip_size_kb": 0, 00:22:37.469 "state": "online", 00:22:37.469 "raid_level": "raid1", 00:22:37.469 "superblock": true, 00:22:37.469 "num_base_bdevs": 2, 00:22:37.469 "num_base_bdevs_discovered": 1, 00:22:37.469 "num_base_bdevs_operational": 1, 00:22:37.469 "base_bdevs_list": [ 00:22:37.469 { 00:22:37.469 "name": null, 00:22:37.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.470 "is_configured": false, 00:22:37.470 
"data_offset": 0, 00:22:37.470 "data_size": 7936 00:22:37.470 }, 00:22:37.470 { 00:22:37.470 "name": "BaseBdev2", 00:22:37.470 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:37.470 "is_configured": true, 00:22:37.470 "data_offset": 256, 00:22:37.470 "data_size": 7936 00:22:37.470 } 00:22:37.470 ] 00:22:37.470 }' 00:22:37.470 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.470 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.038 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:38.038 "name": "raid_bdev1", 00:22:38.038 "uuid": "da6bfecc-65a3-466c-8320-5d37f70bc3ba", 00:22:38.038 
"strip_size_kb": 0, 00:22:38.038 "state": "online", 00:22:38.038 "raid_level": "raid1", 00:22:38.038 "superblock": true, 00:22:38.038 "num_base_bdevs": 2, 00:22:38.038 "num_base_bdevs_discovered": 1, 00:22:38.038 "num_base_bdevs_operational": 1, 00:22:38.038 "base_bdevs_list": [ 00:22:38.038 { 00:22:38.038 "name": null, 00:22:38.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.039 "is_configured": false, 00:22:38.039 "data_offset": 0, 00:22:38.039 "data_size": 7936 00:22:38.039 }, 00:22:38.039 { 00:22:38.039 "name": "BaseBdev2", 00:22:38.039 "uuid": "c06ce386-89c7-5f1f-b154-2b2f76f8f1ad", 00:22:38.039 "is_configured": true, 00:22:38.039 "data_offset": 256, 00:22:38.039 "data_size": 7936 00:22:38.039 } 00:22:38.039 ] 00:22:38.039 }' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88027 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88027 ']' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88027 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88027 00:22:38.039 14:22:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:38.039 killing process with pid 88027 00:22:38.039 Received shutdown signal, test time was about 60.000000 seconds 00:22:38.039 00:22:38.039 Latency(us) 00:22:38.039 [2024-11-27T14:22:08.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.039 [2024-11-27T14:22:08.995Z] =================================================================================================================== 00:22:38.039 [2024-11-27T14:22:08.995Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88027' 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88027 00:22:38.039 [2024-11-27 14:22:08.972470] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:38.039 [2024-11-27 14:22:08.972597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.039 14:22:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88027 00:22:38.039 [2024-11-27 14:22:08.972645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.039 [2024-11-27 14:22:08.972657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:38.605 [2024-11-27 14:22:09.293757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.541 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:22:39.541 00:22:39.541 real 0m19.944s 00:22:39.541 user 0m26.075s 00:22:39.541 sys 0m2.565s 00:22:39.541 
************************************ 00:22:39.541 END TEST raid_rebuild_test_sb_md_separate 00:22:39.541 ************************************ 00:22:39.541 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:39.541 14:22:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.541 14:22:10 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:39.542 14:22:10 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:39.542 14:22:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:39.542 14:22:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.542 14:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.542 ************************************ 00:22:39.542 START TEST raid_state_function_test_sb_md_interleaved 00:22:39.542 ************************************ 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.542 14:22:10 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88717 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88717' 00:22:39.542 Process raid pid: 88717 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88717 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88717 ']' 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.542 14:22:10 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:39.801 [2024-11-27 14:22:10.569302] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:39.801 [2024-11-27 14:22:10.569455] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.801 [2024-11-27 14:22:10.744940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.059 [2024-11-27 14:22:10.859784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.317 [2024-11-27 14:22:11.054480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.317 [2024-11-27 14:22:11.054511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:40.576 [2024-11-27 14:22:11.424965] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:40.576 [2024-11-27 14:22:11.425020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:40.576 [2024-11-27 14:22:11.425031] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:40.576 [2024-11-27 14:22:11.425040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:40.576 14:22:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:40.576 14:22:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.576 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:40.576 "name": "Existed_Raid", 00:22:40.576 "uuid": "4296f5a0-a6fe-4228-b98a-99e0e7ae9018", 00:22:40.576 "strip_size_kb": 0, 00:22:40.576 "state": "configuring", 00:22:40.576 "raid_level": "raid1", 00:22:40.576 "superblock": true, 00:22:40.576 "num_base_bdevs": 2, 00:22:40.576 "num_base_bdevs_discovered": 0, 00:22:40.576 "num_base_bdevs_operational": 2, 00:22:40.576 "base_bdevs_list": [ 00:22:40.576 { 00:22:40.576 "name": "BaseBdev1", 00:22:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.576 "is_configured": false, 00:22:40.576 "data_offset": 0, 00:22:40.576 "data_size": 0 00:22:40.576 }, 00:22:40.576 { 00:22:40.576 "name": "BaseBdev2", 00:22:40.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.576 "is_configured": false, 00:22:40.576 "data_offset": 0, 00:22:40.576 "data_size": 0 00:22:40.576 } 00:22:40.576 ] 00:22:40.576 }' 00:22:40.577 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:40.577 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 [2024-11-27 14:22:11.868154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.144 [2024-11-27 14:22:11.868234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 [2024-11-27 14:22:11.880111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:41.144 [2024-11-27 14:22:11.880197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:41.144 [2024-11-27 14:22:11.880223] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.144 [2024-11-27 14:22:11.880249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 [2024-11-27 14:22:11.926759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.144 BaseBdev1 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.144 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.144 [ 00:22:41.144 { 00:22:41.144 "name": "BaseBdev1", 00:22:41.144 "aliases": [ 00:22:41.144 "81407828-bea2-4ec7-b2fa-cddde9ff4889" 00:22:41.144 ], 00:22:41.144 "product_name": "Malloc disk", 00:22:41.144 "block_size": 4128, 00:22:41.144 "num_blocks": 8192, 00:22:41.144 "uuid": "81407828-bea2-4ec7-b2fa-cddde9ff4889", 00:22:41.144 "md_size": 32, 00:22:41.144 
"md_interleave": true, 00:22:41.144 "dif_type": 0, 00:22:41.144 "assigned_rate_limits": { 00:22:41.144 "rw_ios_per_sec": 0, 00:22:41.144 "rw_mbytes_per_sec": 0, 00:22:41.144 "r_mbytes_per_sec": 0, 00:22:41.144 "w_mbytes_per_sec": 0 00:22:41.144 }, 00:22:41.144 "claimed": true, 00:22:41.144 "claim_type": "exclusive_write", 00:22:41.144 "zoned": false, 00:22:41.144 "supported_io_types": { 00:22:41.144 "read": true, 00:22:41.144 "write": true, 00:22:41.144 "unmap": true, 00:22:41.144 "flush": true, 00:22:41.144 "reset": true, 00:22:41.144 "nvme_admin": false, 00:22:41.144 "nvme_io": false, 00:22:41.144 "nvme_io_md": false, 00:22:41.144 "write_zeroes": true, 00:22:41.144 "zcopy": true, 00:22:41.144 "get_zone_info": false, 00:22:41.144 "zone_management": false, 00:22:41.144 "zone_append": false, 00:22:41.144 "compare": false, 00:22:41.144 "compare_and_write": false, 00:22:41.144 "abort": true, 00:22:41.144 "seek_hole": false, 00:22:41.144 "seek_data": false, 00:22:41.144 "copy": true, 00:22:41.144 "nvme_iov_md": false 00:22:41.144 }, 00:22:41.144 "memory_domains": [ 00:22:41.145 { 00:22:41.145 "dma_device_id": "system", 00:22:41.145 "dma_device_type": 1 00:22:41.145 }, 00:22:41.145 { 00:22:41.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.145 "dma_device_type": 2 00:22:41.145 } 00:22:41.145 ], 00:22:41.145 "driver_specific": {} 00:22:41.145 } 00:22:41.145 ] 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.145 14:22:11 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.145 14:22:11 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.145 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.145 "name": "Existed_Raid", 00:22:41.145 "uuid": "bf0836ce-f8a4-4a8b-8822-2de11932c27b", 00:22:41.145 "strip_size_kb": 0, 00:22:41.145 "state": "configuring", 00:22:41.145 "raid_level": "raid1", 
00:22:41.145 "superblock": true, 00:22:41.145 "num_base_bdevs": 2, 00:22:41.145 "num_base_bdevs_discovered": 1, 00:22:41.145 "num_base_bdevs_operational": 2, 00:22:41.145 "base_bdevs_list": [ 00:22:41.145 { 00:22:41.145 "name": "BaseBdev1", 00:22:41.145 "uuid": "81407828-bea2-4ec7-b2fa-cddde9ff4889", 00:22:41.145 "is_configured": true, 00:22:41.145 "data_offset": 256, 00:22:41.145 "data_size": 7936 00:22:41.145 }, 00:22:41.145 { 00:22:41.145 "name": "BaseBdev2", 00:22:41.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.145 "is_configured": false, 00:22:41.145 "data_offset": 0, 00:22:41.145 "data_size": 0 00:22:41.145 } 00:22:41.145 ] 00:22:41.145 }' 00:22:41.145 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:41.145 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.741 [2024-11-27 14:22:12.394040] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:41.741 [2024-11-27 14:22:12.394154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.741 [2024-11-27 14:22:12.406057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.741 [2024-11-27 14:22:12.407809] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:41.741 [2024-11-27 14:22:12.407850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:41.741 
14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:41.741 "name": "Existed_Raid", 00:22:41.741 "uuid": "d4795260-8a30-4b59-a357-feac58199a15", 00:22:41.741 "strip_size_kb": 0, 00:22:41.741 "state": "configuring", 00:22:41.741 "raid_level": "raid1", 00:22:41.741 "superblock": true, 00:22:41.741 "num_base_bdevs": 2, 00:22:41.741 "num_base_bdevs_discovered": 1, 00:22:41.741 "num_base_bdevs_operational": 2, 00:22:41.741 "base_bdevs_list": [ 00:22:41.741 { 00:22:41.741 "name": "BaseBdev1", 00:22:41.741 "uuid": "81407828-bea2-4ec7-b2fa-cddde9ff4889", 00:22:41.741 "is_configured": true, 00:22:41.741 "data_offset": 256, 00:22:41.741 "data_size": 7936 00:22:41.741 }, 00:22:41.741 { 00:22:41.741 "name": "BaseBdev2", 00:22:41.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.741 "is_configured": false, 00:22:41.741 "data_offset": 0, 00:22:41.741 "data_size": 0 00:22:41.741 } 00:22:41.741 ] 00:22:41.741 }' 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:41.741 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.000 [2024-11-27 14:22:12.933359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.000 [2024-11-27 14:22:12.933641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:42.000 [2024-11-27 14:22:12.933689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:42.000 [2024-11-27 14:22:12.933788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:42.000 [2024-11-27 14:22:12.933896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:42.000 [2024-11-27 14:22:12.933933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:42.000 [2024-11-27 14:22:12.934028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.000 BaseBdev2 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.000 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.259 [ 00:22:42.259 { 00:22:42.259 "name": "BaseBdev2", 00:22:42.259 "aliases": [ 00:22:42.259 "6735da8f-f387-4515-8787-a93d49a5f8e1" 00:22:42.259 ], 00:22:42.259 "product_name": "Malloc disk", 00:22:42.259 "block_size": 4128, 00:22:42.259 "num_blocks": 8192, 00:22:42.259 "uuid": "6735da8f-f387-4515-8787-a93d49a5f8e1", 00:22:42.259 "md_size": 32, 00:22:42.259 "md_interleave": true, 00:22:42.259 "dif_type": 0, 00:22:42.259 "assigned_rate_limits": { 00:22:42.259 "rw_ios_per_sec": 0, 00:22:42.259 "rw_mbytes_per_sec": 0, 00:22:42.259 "r_mbytes_per_sec": 0, 00:22:42.259 "w_mbytes_per_sec": 0 00:22:42.259 }, 00:22:42.259 "claimed": true, 00:22:42.259 "claim_type": "exclusive_write", 
00:22:42.259 "zoned": false, 00:22:42.259 "supported_io_types": { 00:22:42.259 "read": true, 00:22:42.259 "write": true, 00:22:42.259 "unmap": true, 00:22:42.259 "flush": true, 00:22:42.259 "reset": true, 00:22:42.259 "nvme_admin": false, 00:22:42.259 "nvme_io": false, 00:22:42.259 "nvme_io_md": false, 00:22:42.259 "write_zeroes": true, 00:22:42.259 "zcopy": true, 00:22:42.259 "get_zone_info": false, 00:22:42.259 "zone_management": false, 00:22:42.259 "zone_append": false, 00:22:42.259 "compare": false, 00:22:42.259 "compare_and_write": false, 00:22:42.259 "abort": true, 00:22:42.259 "seek_hole": false, 00:22:42.259 "seek_data": false, 00:22:42.259 "copy": true, 00:22:42.259 "nvme_iov_md": false 00:22:42.259 }, 00:22:42.259 "memory_domains": [ 00:22:42.259 { 00:22:42.259 "dma_device_id": "system", 00:22:42.259 "dma_device_type": 1 00:22:42.259 }, 00:22:42.259 { 00:22:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.259 "dma_device_type": 2 00:22:42.259 } 00:22:42.259 ], 00:22:42.259 "driver_specific": {} 00:22:42.259 } 00:22:42.259 ] 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:42.259 
14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.259 14:22:12 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.259 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.259 "name": "Existed_Raid", 00:22:42.259 "uuid": "d4795260-8a30-4b59-a357-feac58199a15", 00:22:42.259 "strip_size_kb": 0, 00:22:42.259 "state": "online", 00:22:42.259 "raid_level": "raid1", 00:22:42.259 "superblock": true, 00:22:42.259 "num_base_bdevs": 2, 00:22:42.260 "num_base_bdevs_discovered": 2, 00:22:42.260 
"num_base_bdevs_operational": 2, 00:22:42.260 "base_bdevs_list": [ 00:22:42.260 { 00:22:42.260 "name": "BaseBdev1", 00:22:42.260 "uuid": "81407828-bea2-4ec7-b2fa-cddde9ff4889", 00:22:42.260 "is_configured": true, 00:22:42.260 "data_offset": 256, 00:22:42.260 "data_size": 7936 00:22:42.260 }, 00:22:42.260 { 00:22:42.260 "name": "BaseBdev2", 00:22:42.260 "uuid": "6735da8f-f387-4515-8787-a93d49a5f8e1", 00:22:42.260 "is_configured": true, 00:22:42.260 "data_offset": 256, 00:22:42.260 "data_size": 7936 00:22:42.260 } 00:22:42.260 ] 00:22:42.260 }' 00:22:42.260 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.260 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.519 14:22:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.519 [2024-11-27 14:22:13.420905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.519 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:42.519 "name": "Existed_Raid", 00:22:42.519 "aliases": [ 00:22:42.519 "d4795260-8a30-4b59-a357-feac58199a15" 00:22:42.519 ], 00:22:42.519 "product_name": "Raid Volume", 00:22:42.519 "block_size": 4128, 00:22:42.519 "num_blocks": 7936, 00:22:42.519 "uuid": "d4795260-8a30-4b59-a357-feac58199a15", 00:22:42.519 "md_size": 32, 00:22:42.519 "md_interleave": true, 00:22:42.519 "dif_type": 0, 00:22:42.519 "assigned_rate_limits": { 00:22:42.519 "rw_ios_per_sec": 0, 00:22:42.519 "rw_mbytes_per_sec": 0, 00:22:42.519 "r_mbytes_per_sec": 0, 00:22:42.519 "w_mbytes_per_sec": 0 00:22:42.519 }, 00:22:42.519 "claimed": false, 00:22:42.519 "zoned": false, 00:22:42.519 "supported_io_types": { 00:22:42.519 "read": true, 00:22:42.519 "write": true, 00:22:42.519 "unmap": false, 00:22:42.519 "flush": false, 00:22:42.519 "reset": true, 00:22:42.519 "nvme_admin": false, 00:22:42.519 "nvme_io": false, 00:22:42.519 "nvme_io_md": false, 00:22:42.519 "write_zeroes": true, 00:22:42.519 "zcopy": false, 00:22:42.519 "get_zone_info": false, 00:22:42.519 "zone_management": false, 00:22:42.519 "zone_append": false, 00:22:42.519 "compare": false, 00:22:42.519 "compare_and_write": false, 00:22:42.519 "abort": false, 00:22:42.519 "seek_hole": false, 00:22:42.519 "seek_data": false, 00:22:42.519 "copy": false, 00:22:42.519 "nvme_iov_md": false 00:22:42.519 }, 00:22:42.519 "memory_domains": [ 00:22:42.519 { 00:22:42.519 "dma_device_id": "system", 00:22:42.519 "dma_device_type": 1 00:22:42.519 }, 00:22:42.519 { 00:22:42.519 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:42.519 "dma_device_type": 2 00:22:42.520 }, 00:22:42.520 { 00:22:42.520 "dma_device_id": "system", 00:22:42.520 "dma_device_type": 1 00:22:42.520 }, 00:22:42.520 { 00:22:42.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.520 "dma_device_type": 2 00:22:42.520 } 00:22:42.520 ], 00:22:42.520 "driver_specific": { 00:22:42.520 "raid": { 00:22:42.520 "uuid": "d4795260-8a30-4b59-a357-feac58199a15", 00:22:42.520 "strip_size_kb": 0, 00:22:42.520 "state": "online", 00:22:42.520 "raid_level": "raid1", 00:22:42.520 "superblock": true, 00:22:42.520 "num_base_bdevs": 2, 00:22:42.520 "num_base_bdevs_discovered": 2, 00:22:42.520 "num_base_bdevs_operational": 2, 00:22:42.520 "base_bdevs_list": [ 00:22:42.520 { 00:22:42.520 "name": "BaseBdev1", 00:22:42.520 "uuid": "81407828-bea2-4ec7-b2fa-cddde9ff4889", 00:22:42.520 "is_configured": true, 00:22:42.520 "data_offset": 256, 00:22:42.520 "data_size": 7936 00:22:42.520 }, 00:22:42.520 { 00:22:42.520 "name": "BaseBdev2", 00:22:42.520 "uuid": "6735da8f-f387-4515-8787-a93d49a5f8e1", 00:22:42.520 "is_configured": true, 00:22:42.520 "data_offset": 256, 00:22:42.520 "data_size": 7936 00:22:42.520 } 00:22:42.520 ] 00:22:42.520 } 00:22:42.520 } 00:22:42.520 }' 00:22:42.520 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:42.779 BaseBdev2' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:42.779 
14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.779 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.779 [2024-11-27 14:22:13.652233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.038 14:22:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.038 "name": "Existed_Raid", 00:22:43.038 "uuid": "d4795260-8a30-4b59-a357-feac58199a15", 00:22:43.038 "strip_size_kb": 0, 00:22:43.038 "state": "online", 00:22:43.038 "raid_level": "raid1", 00:22:43.038 "superblock": true, 00:22:43.038 "num_base_bdevs": 2, 00:22:43.038 "num_base_bdevs_discovered": 1, 00:22:43.038 "num_base_bdevs_operational": 1, 00:22:43.038 "base_bdevs_list": [ 00:22:43.038 { 00:22:43.038 "name": null, 00:22:43.038 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:43.038 "is_configured": false, 00:22:43.038 "data_offset": 0, 00:22:43.038 "data_size": 7936 00:22:43.038 }, 00:22:43.038 { 00:22:43.038 "name": "BaseBdev2", 00:22:43.038 "uuid": "6735da8f-f387-4515-8787-a93d49a5f8e1", 00:22:43.038 "is_configured": true, 00:22:43.038 "data_offset": 256, 00:22:43.038 "data_size": 7936 00:22:43.038 } 00:22:43.038 ] 00:22:43.038 }' 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.038 14:22:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:43.298 14:22:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.298 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.298 [2024-11-27 14:22:14.201762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:43.298 [2024-11-27 14:22:14.201865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.558 [2024-11-27 14:22:14.294832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.558 [2024-11-27 14:22:14.294884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.558 [2024-11-27 14:22:14.294897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88717 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88717 ']' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88717 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88717 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88717' 00:22:43.558 killing process with pid 88717 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88717 00:22:43.558 [2024-11-27 14:22:14.396405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:43.558 14:22:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88717 00:22:43.558 [2024-11-27 14:22:14.412889] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:44.939 
14:22:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:44.939 00:22:44.939 real 0m5.094s 00:22:44.939 user 0m7.333s 00:22:44.939 sys 0m0.878s 00:22:44.939 14:22:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.939 14:22:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.939 ************************************ 00:22:44.939 END TEST raid_state_function_test_sb_md_interleaved 00:22:44.939 ************************************ 00:22:44.939 14:22:15 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:44.939 14:22:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:44.939 14:22:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.939 14:22:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:44.939 ************************************ 00:22:44.939 START TEST raid_superblock_test_md_interleaved 00:22:44.939 ************************************ 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88965 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88965 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88965 ']' 00:22:44.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.939 14:22:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.939 [2024-11-27 14:22:15.731815] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:22:44.939 [2024-11-27 14:22:15.731986] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88965 ] 00:22:45.199 [2024-11-27 14:22:15.905367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.199 [2024-11-27 14:22:16.021704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.459 [2024-11-27 14:22:16.220356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.459 [2024-11-27 14:22:16.220390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.719 malloc1 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.719 [2024-11-27 14:22:16.633469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:45.719 [2024-11-27 14:22:16.633564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:22:45.719 [2024-11-27 14:22:16.633604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:45.719 [2024-11-27 14:22:16.633633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.719 [2024-11-27 14:22:16.635395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.719 [2024-11-27 14:22:16.635463] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:45.719 pt1 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:45.719 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:45.720 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.720 14:22:16 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.980 malloc2 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.980 [2024-11-27 14:22:16.694182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:45.980 [2024-11-27 14:22:16.694243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:45.980 [2024-11-27 14:22:16.694266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:45.980 [2024-11-27 14:22:16.694274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:45.980 [2024-11-27 14:22:16.696048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:45.980 [2024-11-27 14:22:16.696086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:45.980 pt2 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.980 [2024-11-27 14:22:16.706184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:45.980 [2024-11-27 14:22:16.707892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:45.980 [2024-11-27 14:22:16.708190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:45.980 [2024-11-27 14:22:16.708208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:45.980 [2024-11-27 14:22:16.708291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:45.980 [2024-11-27 14:22:16.708367] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:45.980 [2024-11-27 14:22:16.708380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:45.980 [2024-11-27 14:22:16.708466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:45.980 14:22:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.980 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:45.980 "name": "raid_bdev1", 00:22:45.980 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:45.980 "strip_size_kb": 0, 00:22:45.980 "state": "online", 00:22:45.980 "raid_level": "raid1", 00:22:45.980 "superblock": true, 00:22:45.980 "num_base_bdevs": 2, 00:22:45.980 "num_base_bdevs_discovered": 2, 00:22:45.980 "num_base_bdevs_operational": 2, 00:22:45.980 "base_bdevs_list": [ 00:22:45.980 { 00:22:45.980 "name": "pt1", 00:22:45.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:45.980 "is_configured": true, 00:22:45.980 "data_offset": 256, 00:22:45.980 "data_size": 7936 00:22:45.980 }, 00:22:45.980 { 00:22:45.980 "name": "pt2", 00:22:45.980 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:22:45.980 "is_configured": true, 00:22:45.980 "data_offset": 256, 00:22:45.980 "data_size": 7936 00:22:45.981 } 00:22:45.981 ] 00:22:45.981 }' 00:22:45.981 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:45.981 14:22:16 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.240 [2024-11-27 14:22:17.117712] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.240 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:22:46.240 "name": "raid_bdev1", 00:22:46.240 "aliases": [ 00:22:46.240 "10dfec82-c743-43ed-9d91-de0f5d7099f7" 00:22:46.240 ], 00:22:46.240 "product_name": "Raid Volume", 00:22:46.240 "block_size": 4128, 00:22:46.240 "num_blocks": 7936, 00:22:46.240 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:46.240 "md_size": 32, 00:22:46.240 "md_interleave": true, 00:22:46.240 "dif_type": 0, 00:22:46.240 "assigned_rate_limits": { 00:22:46.240 "rw_ios_per_sec": 0, 00:22:46.240 "rw_mbytes_per_sec": 0, 00:22:46.240 "r_mbytes_per_sec": 0, 00:22:46.241 "w_mbytes_per_sec": 0 00:22:46.241 }, 00:22:46.241 "claimed": false, 00:22:46.241 "zoned": false, 00:22:46.241 "supported_io_types": { 00:22:46.241 "read": true, 00:22:46.241 "write": true, 00:22:46.241 "unmap": false, 00:22:46.241 "flush": false, 00:22:46.241 "reset": true, 00:22:46.241 "nvme_admin": false, 00:22:46.241 "nvme_io": false, 00:22:46.241 "nvme_io_md": false, 00:22:46.241 "write_zeroes": true, 00:22:46.241 "zcopy": false, 00:22:46.241 "get_zone_info": false, 00:22:46.241 "zone_management": false, 00:22:46.241 "zone_append": false, 00:22:46.241 "compare": false, 00:22:46.241 "compare_and_write": false, 00:22:46.241 "abort": false, 00:22:46.241 "seek_hole": false, 00:22:46.241 "seek_data": false, 00:22:46.241 "copy": false, 00:22:46.241 "nvme_iov_md": false 00:22:46.241 }, 00:22:46.241 "memory_domains": [ 00:22:46.241 { 00:22:46.241 "dma_device_id": "system", 00:22:46.241 "dma_device_type": 1 00:22:46.241 }, 00:22:46.241 { 00:22:46.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.241 "dma_device_type": 2 00:22:46.241 }, 00:22:46.241 { 00:22:46.241 "dma_device_id": "system", 00:22:46.241 "dma_device_type": 1 00:22:46.241 }, 00:22:46.241 { 00:22:46.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.241 "dma_device_type": 2 00:22:46.241 } 00:22:46.241 ], 00:22:46.241 "driver_specific": { 00:22:46.241 "raid": { 00:22:46.241 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:46.241 "strip_size_kb": 0, 
00:22:46.241 "state": "online", 00:22:46.241 "raid_level": "raid1", 00:22:46.241 "superblock": true, 00:22:46.241 "num_base_bdevs": 2, 00:22:46.241 "num_base_bdevs_discovered": 2, 00:22:46.241 "num_base_bdevs_operational": 2, 00:22:46.241 "base_bdevs_list": [ 00:22:46.241 { 00:22:46.241 "name": "pt1", 00:22:46.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:46.241 "is_configured": true, 00:22:46.241 "data_offset": 256, 00:22:46.241 "data_size": 7936 00:22:46.241 }, 00:22:46.241 { 00:22:46.241 "name": "pt2", 00:22:46.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:46.241 "is_configured": true, 00:22:46.241 "data_offset": 256, 00:22:46.241 "data_size": 7936 00:22:46.241 } 00:22:46.241 ] 00:22:46.241 } 00:22:46.241 } 00:22:46.241 }' 00:22:46.241 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:46.501 pt2' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@10 -- # set +x 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:46.501 [2024-11-27 14:22:17.373280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=10dfec82-c743-43ed-9d91-de0f5d7099f7 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 10dfec82-c743-43ed-9d91-de0f5d7099f7 ']' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.501 [2024-11-27 14:22:17.416889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.501 [2024-11-27 14:22:17.416951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:46.501 [2024-11-27 14:22:17.417060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:46.501 [2024-11-27 14:22:17.417147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:46.501 [2024-11-27 14:22:17.417200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.501 14:22:17 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:46.501 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:46.761 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.762 [2024-11-27 14:22:17.532717] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:46.762 [2024-11-27 14:22:17.534585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:46.762 [2024-11-27 14:22:17.534699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:46.762 [2024-11-27 14:22:17.534790] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:46.762 [2024-11-27 14:22:17.534838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:46.762 [2024-11-27 14:22:17.534865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:46.762 request: 00:22:46.762 { 00:22:46.762 "name": "raid_bdev1", 00:22:46.762 "raid_level": "raid1", 00:22:46.762 "base_bdevs": [ 00:22:46.762 "malloc1", 00:22:46.762 "malloc2" 00:22:46.762 ], 00:22:46.762 "superblock": false, 00:22:46.762 "method": "bdev_raid_create", 00:22:46.762 "req_id": 1 00:22:46.762 } 00:22:46.762 Got JSON-RPC error response 00:22:46.762 response: 00:22:46.762 { 00:22:46.762 "code": -17, 00:22:46.762 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:46.762 } 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.762 [2024-11-27 14:22:17.596567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:46.762 [2024-11-27 14:22:17.596616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.762 [2024-11-27 14:22:17.596631] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:46.762 [2024-11-27 14:22:17.596642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.762 [2024-11-27 14:22:17.598407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.762 [2024-11-27 14:22:17.598444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:46.762 [2024-11-27 14:22:17.598489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:22:46.762 [2024-11-27 14:22:17.598549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:46.762 pt1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:46.762 "name": "raid_bdev1", 00:22:46.762 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:46.762 "strip_size_kb": 0, 00:22:46.762 "state": "configuring", 00:22:46.762 "raid_level": "raid1", 00:22:46.762 "superblock": true, 00:22:46.762 "num_base_bdevs": 2, 00:22:46.762 "num_base_bdevs_discovered": 1, 00:22:46.762 "num_base_bdevs_operational": 2, 00:22:46.762 "base_bdevs_list": [ 00:22:46.762 { 00:22:46.762 "name": "pt1", 00:22:46.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:46.762 "is_configured": true, 00:22:46.762 "data_offset": 256, 00:22:46.762 "data_size": 7936 00:22:46.762 }, 00:22:46.762 { 00:22:46.762 "name": null, 00:22:46.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:46.762 "is_configured": false, 00:22:46.762 "data_offset": 256, 00:22:46.762 "data_size": 7936 00:22:46.762 } 00:22:46.762 ] 00:22:46.762 }' 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:46.762 14:22:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.333 [2024-11-27 14:22:18.087880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:47.333 [2024-11-27 14:22:18.088023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.333 [2024-11-27 14:22:18.088065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:47.333 [2024-11-27 14:22:18.088096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.333 [2024-11-27 14:22:18.088302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.333 [2024-11-27 14:22:18.088353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:47.333 [2024-11-27 14:22:18.088427] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:47.333 [2024-11-27 14:22:18.088477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:47.333 [2024-11-27 14:22:18.088591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:47.333 [2024-11-27 14:22:18.088630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:47.333 [2024-11-27 14:22:18.088726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:47.333 [2024-11-27 14:22:18.088832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:47.333 [2024-11-27 14:22:18.088866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:47.333 [2024-11-27 14:22:18.088970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.333 pt2 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.333 "name": "raid_bdev1", 00:22:47.333 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:47.333 "strip_size_kb": 0, 00:22:47.333 "state": "online", 00:22:47.333 "raid_level": "raid1", 00:22:47.333 "superblock": true, 00:22:47.333 "num_base_bdevs": 2, 00:22:47.333 "num_base_bdevs_discovered": 2, 00:22:47.333 "num_base_bdevs_operational": 2, 00:22:47.333 "base_bdevs_list": [ 00:22:47.333 { 00:22:47.333 "name": "pt1", 00:22:47.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:47.333 "is_configured": true, 00:22:47.333 "data_offset": 256, 00:22:47.333 "data_size": 7936 00:22:47.333 }, 00:22:47.333 { 00:22:47.333 "name": "pt2", 00:22:47.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:47.333 "is_configured": true, 00:22:47.333 "data_offset": 256, 00:22:47.333 "data_size": 7936 00:22:47.333 } 00:22:47.333 ] 00:22:47.333 }' 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.333 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:47.594 14:22:18 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.594 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.594 [2024-11-27 14:22:18.535405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.853 "name": "raid_bdev1", 00:22:47.853 "aliases": [ 00:22:47.853 "10dfec82-c743-43ed-9d91-de0f5d7099f7" 00:22:47.853 ], 00:22:47.853 "product_name": "Raid Volume", 00:22:47.853 "block_size": 4128, 00:22:47.853 "num_blocks": 7936, 00:22:47.853 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:47.853 "md_size": 32, 00:22:47.853 "md_interleave": true, 00:22:47.853 "dif_type": 0, 00:22:47.853 "assigned_rate_limits": { 00:22:47.853 "rw_ios_per_sec": 0, 00:22:47.853 "rw_mbytes_per_sec": 0, 00:22:47.853 "r_mbytes_per_sec": 0, 00:22:47.853 "w_mbytes_per_sec": 0 00:22:47.853 }, 00:22:47.853 "claimed": false, 00:22:47.853 "zoned": false, 00:22:47.853 "supported_io_types": { 00:22:47.853 "read": true, 00:22:47.853 "write": true, 00:22:47.853 "unmap": false, 00:22:47.853 "flush": false, 00:22:47.853 "reset": true, 00:22:47.853 "nvme_admin": false, 00:22:47.853 "nvme_io": false, 00:22:47.853 "nvme_io_md": false, 00:22:47.853 "write_zeroes": true, 00:22:47.853 "zcopy": false, 00:22:47.853 "get_zone_info": false, 00:22:47.853 "zone_management": 
false, 00:22:47.853 "zone_append": false, 00:22:47.853 "compare": false, 00:22:47.853 "compare_and_write": false, 00:22:47.853 "abort": false, 00:22:47.853 "seek_hole": false, 00:22:47.853 "seek_data": false, 00:22:47.853 "copy": false, 00:22:47.853 "nvme_iov_md": false 00:22:47.853 }, 00:22:47.853 "memory_domains": [ 00:22:47.853 { 00:22:47.853 "dma_device_id": "system", 00:22:47.853 "dma_device_type": 1 00:22:47.853 }, 00:22:47.853 { 00:22:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.853 "dma_device_type": 2 00:22:47.853 }, 00:22:47.853 { 00:22:47.853 "dma_device_id": "system", 00:22:47.853 "dma_device_type": 1 00:22:47.853 }, 00:22:47.853 { 00:22:47.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.853 "dma_device_type": 2 00:22:47.853 } 00:22:47.853 ], 00:22:47.853 "driver_specific": { 00:22:47.853 "raid": { 00:22:47.853 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:47.853 "strip_size_kb": 0, 00:22:47.853 "state": "online", 00:22:47.853 "raid_level": "raid1", 00:22:47.853 "superblock": true, 00:22:47.853 "num_base_bdevs": 2, 00:22:47.853 "num_base_bdevs_discovered": 2, 00:22:47.853 "num_base_bdevs_operational": 2, 00:22:47.853 "base_bdevs_list": [ 00:22:47.853 { 00:22:47.853 "name": "pt1", 00:22:47.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:47.853 "is_configured": true, 00:22:47.853 "data_offset": 256, 00:22:47.853 "data_size": 7936 00:22:47.853 }, 00:22:47.853 { 00:22:47.853 "name": "pt2", 00:22:47.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:47.853 "is_configured": true, 00:22:47.853 "data_offset": 256, 00:22:47.853 "data_size": 7936 00:22:47.853 } 00:22:47.853 ] 00:22:47.853 } 00:22:47.853 } 00:22:47.853 }' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:22:47.853 pt2' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:47.853 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:47.854 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.854 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.854 [2024-11-27 14:22:18.782937] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.854 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 10dfec82-c743-43ed-9d91-de0f5d7099f7 '!=' 10dfec82-c743-43ed-9d91-de0f5d7099f7 ']' 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.113 14:22:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.113 [2024-11-27 14:22:18.830648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.113 14:22:18 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.113 "name": "raid_bdev1", 00:22:48.113 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:48.113 "strip_size_kb": 0, 00:22:48.113 "state": "online", 00:22:48.113 "raid_level": "raid1", 00:22:48.113 "superblock": true, 00:22:48.113 "num_base_bdevs": 2, 00:22:48.113 "num_base_bdevs_discovered": 1, 00:22:48.113 "num_base_bdevs_operational": 1, 00:22:48.113 "base_bdevs_list": [ 00:22:48.113 { 00:22:48.113 "name": null, 00:22:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.113 "is_configured": false, 00:22:48.113 "data_offset": 0, 00:22:48.113 "data_size": 7936 00:22:48.113 }, 00:22:48.113 { 00:22:48.113 "name": "pt2", 00:22:48.113 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.113 "is_configured": true, 00:22:48.113 "data_offset": 256, 00:22:48.113 "data_size": 7936 00:22:48.113 } 00:22:48.113 ] 00:22:48.113 }' 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.113 14:22:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 [2024-11-27 14:22:19.229951] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.374 [2024-11-27 14:22:19.230036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:22:48.374 [2024-11-27 14:22:19.230140] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.374 [2024-11-27 14:22:19.230204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.374 [2024-11-27 14:22:19.230274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 
14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 [2024-11-27 14:22:19.305814] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:48.374 [2024-11-27 14:22:19.305921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.374 [2024-11-27 14:22:19.305953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:48.374 [2024-11-27 14:22:19.305982] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.374 [2024-11-27 14:22:19.307914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.374 [2024-11-27 14:22:19.307991] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:48.374 [2024-11-27 14:22:19.308065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:48.374 [2024-11-27 14:22:19.308144] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:48.374 [2024-11-27 14:22:19.308246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:48.374 [2024-11-27 14:22:19.308286] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:48.374 [2024-11-27 14:22:19.308410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:48.374 [2024-11-27 14:22:19.308516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:48.374 [2024-11-27 14:22:19.308551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:48.374 [2024-11-27 14:22:19.308652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.374 pt2 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.374 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.634 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.634 "name": "raid_bdev1", 00:22:48.634 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:48.634 "strip_size_kb": 0, 00:22:48.634 "state": "online", 00:22:48.634 "raid_level": "raid1", 00:22:48.634 "superblock": true, 00:22:48.634 "num_base_bdevs": 2, 00:22:48.634 "num_base_bdevs_discovered": 1, 00:22:48.634 "num_base_bdevs_operational": 1, 00:22:48.634 "base_bdevs_list": [ 00:22:48.634 { 00:22:48.634 "name": null, 00:22:48.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.634 "is_configured": false, 00:22:48.634 "data_offset": 256, 00:22:48.634 "data_size": 7936 00:22:48.634 }, 00:22:48.634 { 00:22:48.634 "name": "pt2", 00:22:48.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.634 "is_configured": true, 00:22:48.634 "data_offset": 256, 00:22:48.634 "data_size": 7936 00:22:48.634 } 00:22:48.634 ] 00:22:48.634 }' 00:22:48.634 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.634 14:22:19 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.901 [2024-11-27 14:22:19.705151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.901 [2024-11-27 14:22:19.705240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.901 [2024-11-27 14:22:19.705322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.901 [2024-11-27 14:22:19.705375] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.901 [2024-11-27 14:22:19.705384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:48.901 14:22:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.901 [2024-11-27 14:22:19.765061] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:48.901 [2024-11-27 14:22:19.765176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.901 [2024-11-27 14:22:19.765202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:48.901 [2024-11-27 14:22:19.765212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.901 [2024-11-27 14:22:19.767152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.901 [2024-11-27 14:22:19.767188] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:48.901 [2024-11-27 14:22:19.767243] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:48.901 [2024-11-27 14:22:19.767313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:48.901 [2024-11-27 14:22:19.767419] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:48.901 [2024-11-27 14:22:19.767429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.901 [2024-11-27 14:22:19.767448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:22:48.901 [2024-11-27 14:22:19.767504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:48.901 [2024-11-27 14:22:19.767574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:48.901 [2024-11-27 14:22:19.767582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:48.901 [2024-11-27 14:22:19.767650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:48.901 [2024-11-27 14:22:19.767719] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:48.901 [2024-11-27 14:22:19.767728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:48.901 [2024-11-27 14:22:19.767794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.901 pt1 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:48.901 14:22:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.901 "name": "raid_bdev1", 00:22:48.901 "uuid": "10dfec82-c743-43ed-9d91-de0f5d7099f7", 00:22:48.901 "strip_size_kb": 0, 00:22:48.901 "state": "online", 00:22:48.901 "raid_level": "raid1", 00:22:48.901 "superblock": true, 00:22:48.901 "num_base_bdevs": 2, 00:22:48.901 "num_base_bdevs_discovered": 1, 00:22:48.901 "num_base_bdevs_operational": 1, 00:22:48.901 "base_bdevs_list": [ 00:22:48.901 { 00:22:48.901 "name": null, 00:22:48.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.901 "is_configured": false, 00:22:48.901 "data_offset": 256, 00:22:48.901 "data_size": 7936 00:22:48.901 }, 00:22:48.901 { 00:22:48.901 "name": "pt2", 00:22:48.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.901 "is_configured": true, 00:22:48.901 "data_offset": 256, 00:22:48.901 
"data_size": 7936 00:22:48.901 } 00:22:48.901 ] 00:22:48.901 }' 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.901 14:22:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:49.480 [2024-11-27 14:22:20.260460] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 10dfec82-c743-43ed-9d91-de0f5d7099f7 '!=' 10dfec82-c743-43ed-9d91-de0f5d7099f7 ']' 00:22:49.480 14:22:20 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88965 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88965 ']' 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88965 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88965 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88965' 00:22:49.480 killing process with pid 88965 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88965 00:22:49.480 [2024-11-27 14:22:20.348745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.480 [2024-11-27 14:22:20.348885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:49.480 14:22:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88965 00:22:49.480 [2024-11-27 14:22:20.348963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:49.480 [2024-11-27 14:22:20.348981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:49.739 [2024-11-27 14:22:20.564652] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:51.120 14:22:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:51.120 00:22:51.120 real 0m6.043s 00:22:51.120 user 0m9.117s 00:22:51.120 sys 0m1.141s 00:22:51.120 14:22:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.120 14:22:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.120 ************************************ 00:22:51.120 END TEST raid_superblock_test_md_interleaved 00:22:51.120 ************************************ 00:22:51.120 14:22:21 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:51.120 14:22:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:51.120 14:22:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.120 14:22:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:51.120 ************************************ 00:22:51.120 START TEST raid_rebuild_test_sb_md_interleaved 00:22:51.120 ************************************ 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:51.120 14:22:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:51.120 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:51.121 
14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89290 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89290 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89290 ']' 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.121 14:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.121 [2024-11-27 14:22:21.836560] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:22:51.121 [2024-11-27 14:22:21.836756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89290 ] 00:22:51.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:51.121 Zero copy mechanism will not be used. 00:22:51.121 [2024-11-27 14:22:22.011691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.381 [2024-11-27 14:22:22.128402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.381 [2024-11-27 14:22:22.330504] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.381 [2024-11-27 14:22:22.330536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 BaseBdev1_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:51.951 14:22:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 [2024-11-27 14:22:22.741508] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:51.951 [2024-11-27 14:22:22.741566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.951 [2024-11-27 14:22:22.741588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:51.951 [2024-11-27 14:22:22.741599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.951 [2024-11-27 14:22:22.743372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.951 [2024-11-27 14:22:22.743481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:51.951 BaseBdev1 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 BaseBdev2_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 [2024-11-27 14:22:22.791401] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:51.951 [2024-11-27 14:22:22.791459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.951 [2024-11-27 14:22:22.791479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:51.951 [2024-11-27 14:22:22.791492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.951 [2024-11-27 14:22:22.793310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.951 [2024-11-27 14:22:22.793347] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:51.951 BaseBdev2 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 spare_malloc 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:51.951 spare_delay 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 [2024-11-27 14:22:22.860838] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:51.951 [2024-11-27 14:22:22.860939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.951 [2024-11-27 14:22:22.860965] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:51.951 [2024-11-27 14:22:22.860977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.951 [2024-11-27 14:22:22.862769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.951 [2024-11-27 14:22:22.862814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:51.951 spare 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.951 [2024-11-27 14:22:22.868861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:51.951 [2024-11-27 14:22:22.870573] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:51.951 [2024-11-27 14:22:22.870753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:51.951 [2024-11-27 14:22:22.870768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:51.951 [2024-11-27 14:22:22.870836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:51.951 [2024-11-27 14:22:22.870901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:51.951 [2024-11-27 14:22:22.870909] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:51.951 [2024-11-27 14:22:22.870973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:51.951 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.952 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.212 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.212 "name": "raid_bdev1", 00:22:52.212 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:52.212 "strip_size_kb": 0, 00:22:52.212 "state": "online", 00:22:52.212 "raid_level": "raid1", 00:22:52.212 "superblock": true, 00:22:52.212 "num_base_bdevs": 2, 00:22:52.212 "num_base_bdevs_discovered": 2, 00:22:52.212 "num_base_bdevs_operational": 2, 00:22:52.212 "base_bdevs_list": [ 00:22:52.212 { 00:22:52.212 "name": "BaseBdev1", 00:22:52.212 "uuid": "5dda8c5a-4dc5-59ad-a275-f797a88d8baf", 00:22:52.212 "is_configured": true, 00:22:52.212 "data_offset": 256, 00:22:52.212 "data_size": 7936 00:22:52.212 }, 00:22:52.212 { 00:22:52.212 "name": "BaseBdev2", 00:22:52.212 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:52.212 "is_configured": true, 00:22:52.212 "data_offset": 256, 00:22:52.212 "data_size": 7936 00:22:52.212 } 00:22:52.212 ] 00:22:52.212 }' 00:22:52.212 14:22:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.212 14:22:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:52.472 [2024-11-27 14:22:23.368343] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.472 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.731 [2024-11-27 14:22:23.463857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.731 "name": "raid_bdev1", 00:22:52.731 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:52.731 "strip_size_kb": 0, 00:22:52.731 "state": "online", 00:22:52.731 "raid_level": "raid1", 00:22:52.731 "superblock": true, 00:22:52.731 "num_base_bdevs": 2, 00:22:52.731 "num_base_bdevs_discovered": 1, 00:22:52.731 "num_base_bdevs_operational": 1, 00:22:52.731 "base_bdevs_list": [ 00:22:52.731 { 00:22:52.731 "name": null, 00:22:52.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.731 "is_configured": false, 00:22:52.731 "data_offset": 0, 00:22:52.731 "data_size": 7936 00:22:52.731 }, 00:22:52.731 { 00:22:52.731 "name": "BaseBdev2", 00:22:52.731 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:52.731 "is_configured": true, 00:22:52.731 "data_offset": 256, 00:22:52.731 "data_size": 7936 00:22:52.731 } 00:22:52.731 ] 00:22:52.731 }' 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.731 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.990 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.990 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.990 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:52.990 [2024-11-27 14:22:23.835275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.990 [2024-11-27 14:22:23.851230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:52.990 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.990 14:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:52.990 [2024-11-27 14:22:23.853062] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.926 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.186 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.186 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:22:54.186 "name": "raid_bdev1", 00:22:54.186 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:54.186 "strip_size_kb": 0, 00:22:54.186 "state": "online", 00:22:54.186 "raid_level": "raid1", 00:22:54.186 "superblock": true, 00:22:54.186 "num_base_bdevs": 2, 00:22:54.186 "num_base_bdevs_discovered": 2, 00:22:54.186 "num_base_bdevs_operational": 2, 00:22:54.186 "process": { 00:22:54.186 "type": "rebuild", 00:22:54.186 "target": "spare", 00:22:54.186 "progress": { 00:22:54.186 "blocks": 2560, 00:22:54.186 "percent": 32 00:22:54.186 } 00:22:54.186 }, 00:22:54.186 "base_bdevs_list": [ 00:22:54.186 { 00:22:54.186 "name": "spare", 00:22:54.186 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:54.186 "is_configured": true, 00:22:54.186 "data_offset": 256, 00:22:54.186 "data_size": 7936 00:22:54.186 }, 00:22:54.186 { 00:22:54.186 "name": "BaseBdev2", 00:22:54.186 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:54.186 "is_configured": true, 00:22:54.186 "data_offset": 256, 00:22:54.186 "data_size": 7936 00:22:54.186 } 00:22:54.186 ] 00:22:54.186 }' 00:22:54.186 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.186 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.186 14:22:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.186 [2024-11-27 
14:22:25.017104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.186 [2024-11-27 14:22:25.058831] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.186 [2024-11-27 14:22:25.058948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.186 [2024-11-27 14:22:25.058966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.186 [2024-11-27 14:22:25.058979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.186 14:22:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.186 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.445 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.445 "name": "raid_bdev1", 00:22:54.445 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:54.445 "strip_size_kb": 0, 00:22:54.445 "state": "online", 00:22:54.445 "raid_level": "raid1", 00:22:54.445 "superblock": true, 00:22:54.445 "num_base_bdevs": 2, 00:22:54.445 "num_base_bdevs_discovered": 1, 00:22:54.445 "num_base_bdevs_operational": 1, 00:22:54.445 "base_bdevs_list": [ 00:22:54.445 { 00:22:54.445 "name": null, 00:22:54.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.445 "is_configured": false, 00:22:54.445 "data_offset": 0, 00:22:54.445 "data_size": 7936 00:22:54.445 }, 00:22:54.445 { 00:22:54.445 "name": "BaseBdev2", 00:22:54.445 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:54.445 "is_configured": true, 00:22:54.445 "data_offset": 256, 00:22:54.445 "data_size": 7936 00:22:54.445 } 00:22:54.445 ] 00:22:54.445 }' 00:22:54.445 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.446 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.704 14:22:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:54.704 "name": "raid_bdev1", 00:22:54.704 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:54.704 "strip_size_kb": 0, 00:22:54.704 "state": "online", 00:22:54.704 "raid_level": "raid1", 00:22:54.704 "superblock": true, 00:22:54.704 "num_base_bdevs": 2, 00:22:54.704 "num_base_bdevs_discovered": 1, 00:22:54.704 "num_base_bdevs_operational": 1, 00:22:54.704 "base_bdevs_list": [ 00:22:54.704 { 00:22:54.704 "name": null, 00:22:54.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.704 "is_configured": false, 00:22:54.704 "data_offset": 0, 00:22:54.704 "data_size": 7936 00:22:54.704 }, 00:22:54.704 { 00:22:54.704 "name": "BaseBdev2", 00:22:54.704 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:54.704 "is_configured": true, 00:22:54.704 "data_offset": 256, 
00:22:54.704 "data_size": 7936 00:22:54.704 } 00:22:54.704 ] 00:22:54.704 }' 00:22:54.704 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.963 [2024-11-27 14:22:25.736734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:54.963 [2024-11-27 14:22:25.752733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.963 14:22:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:54.963 [2024-11-27 14:22:25.754519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.900 14:22:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.900 "name": "raid_bdev1", 00:22:55.900 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:55.900 "strip_size_kb": 0, 00:22:55.900 "state": "online", 00:22:55.900 "raid_level": "raid1", 00:22:55.900 "superblock": true, 00:22:55.900 "num_base_bdevs": 2, 00:22:55.900 "num_base_bdevs_discovered": 2, 00:22:55.900 "num_base_bdevs_operational": 2, 00:22:55.900 "process": { 00:22:55.900 "type": "rebuild", 00:22:55.900 "target": "spare", 00:22:55.900 "progress": { 00:22:55.900 "blocks": 2560, 00:22:55.900 "percent": 32 00:22:55.900 } 00:22:55.900 }, 00:22:55.900 "base_bdevs_list": [ 00:22:55.900 { 00:22:55.900 "name": "spare", 00:22:55.900 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:55.900 "is_configured": true, 00:22:55.900 "data_offset": 256, 00:22:55.900 "data_size": 7936 00:22:55.900 }, 00:22:55.900 { 00:22:55.900 "name": "BaseBdev2", 00:22:55.900 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:55.900 "is_configured": true, 00:22:55.900 "data_offset": 256, 00:22:55.900 "data_size": 7936 00:22:55.900 } 
00:22:55.900 ] 00:22:55.900 }' 00:22:55.900 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:56.166 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=754 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.166 14:22:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:56.166 "name": "raid_bdev1", 00:22:56.166 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:56.166 "strip_size_kb": 0, 00:22:56.166 "state": "online", 00:22:56.166 "raid_level": "raid1", 00:22:56.166 "superblock": true, 00:22:56.166 "num_base_bdevs": 2, 00:22:56.166 "num_base_bdevs_discovered": 2, 00:22:56.166 "num_base_bdevs_operational": 2, 00:22:56.166 "process": { 00:22:56.166 "type": "rebuild", 00:22:56.166 "target": "spare", 00:22:56.166 "progress": { 00:22:56.166 "blocks": 2816, 00:22:56.166 "percent": 35 00:22:56.166 } 00:22:56.166 }, 00:22:56.166 "base_bdevs_list": [ 00:22:56.166 { 00:22:56.166 "name": "spare", 00:22:56.166 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:56.166 "is_configured": true, 00:22:56.166 "data_offset": 256, 00:22:56.166 "data_size": 7936 00:22:56.166 }, 00:22:56.166 { 00:22:56.166 "name": "BaseBdev2", 00:22:56.166 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:56.166 "is_configured": true, 00:22:56.166 "data_offset": 256, 00:22:56.166 "data_size": 7936 00:22:56.166 } 00:22:56.166 ] 00:22:56.166 }' 00:22:56.166 14:22:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.166 14:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.166 14:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.166 14:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.166 14:22:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.117 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.376 
14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.376 "name": "raid_bdev1", 00:22:57.376 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:57.376 "strip_size_kb": 0, 00:22:57.376 "state": "online", 00:22:57.376 "raid_level": "raid1", 00:22:57.376 "superblock": true, 00:22:57.376 "num_base_bdevs": 2, 00:22:57.376 "num_base_bdevs_discovered": 2, 00:22:57.376 "num_base_bdevs_operational": 2, 00:22:57.376 "process": { 00:22:57.376 "type": "rebuild", 00:22:57.376 "target": "spare", 00:22:57.376 "progress": { 00:22:57.376 "blocks": 5632, 00:22:57.376 "percent": 70 00:22:57.376 } 00:22:57.376 }, 00:22:57.376 "base_bdevs_list": [ 00:22:57.376 { 00:22:57.376 "name": "spare", 00:22:57.376 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:57.376 "is_configured": true, 00:22:57.376 "data_offset": 256, 00:22:57.376 "data_size": 7936 00:22:57.376 }, 00:22:57.376 { 00:22:57.376 "name": "BaseBdev2", 00:22:57.376 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:57.376 "is_configured": true, 00:22:57.376 "data_offset": 256, 00:22:57.376 "data_size": 7936 00:22:57.376 } 00:22:57.376 ] 00:22:57.376 }' 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.376 14:22:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:57.943 [2024-11-27 14:22:28.868755] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:57.943 [2024-11-27 14:22:28.868948] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:57.943 [2024-11-27 14:22:28.869112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.511 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:58.511 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.512 "name": "raid_bdev1", 00:22:58.512 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:58.512 "strip_size_kb": 0, 00:22:58.512 "state": "online", 00:22:58.512 "raid_level": "raid1", 00:22:58.512 "superblock": true, 00:22:58.512 "num_base_bdevs": 2, 00:22:58.512 
"num_base_bdevs_discovered": 2, 00:22:58.512 "num_base_bdevs_operational": 2, 00:22:58.512 "base_bdevs_list": [ 00:22:58.512 { 00:22:58.512 "name": "spare", 00:22:58.512 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:58.512 "is_configured": true, 00:22:58.512 "data_offset": 256, 00:22:58.512 "data_size": 7936 00:22:58.512 }, 00:22:58.512 { 00:22:58.512 "name": "BaseBdev2", 00:22:58.512 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:58.512 "is_configured": true, 00:22:58.512 "data_offset": 256, 00:22:58.512 "data_size": 7936 00:22:58.512 } 00:22:58.512 ] 00:22:58.512 }' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.512 14:22:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.512 "name": "raid_bdev1", 00:22:58.512 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:58.512 "strip_size_kb": 0, 00:22:58.512 "state": "online", 00:22:58.512 "raid_level": "raid1", 00:22:58.512 "superblock": true, 00:22:58.512 "num_base_bdevs": 2, 00:22:58.512 "num_base_bdevs_discovered": 2, 00:22:58.512 "num_base_bdevs_operational": 2, 00:22:58.512 "base_bdevs_list": [ 00:22:58.512 { 00:22:58.512 "name": "spare", 00:22:58.512 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:58.512 "is_configured": true, 00:22:58.512 "data_offset": 256, 00:22:58.512 "data_size": 7936 00:22:58.512 }, 00:22:58.512 { 00:22:58.512 "name": "BaseBdev2", 00:22:58.512 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:58.512 "is_configured": true, 00:22:58.512 "data_offset": 256, 00:22:58.512 "data_size": 7936 00:22:58.512 } 00:22:58.512 ] 00:22:58.512 }' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:58.512 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:58.770 14:22:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:58.770 "name": 
"raid_bdev1", 00:22:58.770 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:58.770 "strip_size_kb": 0, 00:22:58.770 "state": "online", 00:22:58.770 "raid_level": "raid1", 00:22:58.770 "superblock": true, 00:22:58.770 "num_base_bdevs": 2, 00:22:58.770 "num_base_bdevs_discovered": 2, 00:22:58.770 "num_base_bdevs_operational": 2, 00:22:58.770 "base_bdevs_list": [ 00:22:58.770 { 00:22:58.770 "name": "spare", 00:22:58.770 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:58.770 "is_configured": true, 00:22:58.770 "data_offset": 256, 00:22:58.770 "data_size": 7936 00:22:58.770 }, 00:22:58.770 { 00:22:58.770 "name": "BaseBdev2", 00:22:58.770 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:58.770 "is_configured": true, 00:22:58.770 "data_offset": 256, 00:22:58.770 "data_size": 7936 00:22:58.770 } 00:22:58.770 ] 00:22:58.770 }' 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:58.770 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.028 [2024-11-27 14:22:29.887148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:59.028 [2024-11-27 14:22:29.887177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:59.028 [2024-11-27 14:22:29.887272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:59.028 [2024-11-27 14:22:29.887339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:59.028 [2024-11-27 
14:22:29.887351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.028 14:22:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.028 [2024-11-27 14:22:29.942999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:59.028 [2024-11-27 14:22:29.943092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.028 [2024-11-27 14:22:29.943139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:59.028 [2024-11-27 14:22:29.943169] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.028 [2024-11-27 14:22:29.945088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.028 [2024-11-27 14:22:29.945171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:59.028 [2024-11-27 14:22:29.945251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:59.028 [2024-11-27 14:22:29.945314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:59.028 [2024-11-27 14:22:29.945465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:59.028 spare 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.028 14:22:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.286 [2024-11-27 14:22:30.045415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:59.286 [2024-11-27 14:22:30.045488] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:59.286 [2024-11-27 14:22:30.045604] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:59.286 [2024-11-27 14:22:30.045694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:59.286 [2024-11-27 14:22:30.045704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:59.286 [2024-11-27 14:22:30.045785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.286 14:22:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.286 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.286 "name": "raid_bdev1", 00:22:59.286 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:59.286 "strip_size_kb": 0, 00:22:59.286 "state": "online", 00:22:59.286 "raid_level": "raid1", 00:22:59.286 "superblock": true, 00:22:59.286 "num_base_bdevs": 2, 00:22:59.286 "num_base_bdevs_discovered": 2, 00:22:59.286 "num_base_bdevs_operational": 2, 00:22:59.286 "base_bdevs_list": [ 00:22:59.286 { 00:22:59.286 "name": "spare", 00:22:59.287 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:59.287 "is_configured": true, 00:22:59.287 "data_offset": 256, 00:22:59.287 "data_size": 7936 00:22:59.287 }, 00:22:59.287 { 00:22:59.287 "name": "BaseBdev2", 00:22:59.287 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:59.287 "is_configured": true, 00:22:59.287 "data_offset": 256, 00:22:59.287 "data_size": 7936 00:22:59.287 } 00:22:59.287 ] 00:22:59.287 }' 00:22:59.287 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.287 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.546 14:22:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.546 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.546 "name": "raid_bdev1", 00:22:59.546 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:59.546 "strip_size_kb": 0, 00:22:59.546 "state": "online", 00:22:59.546 "raid_level": "raid1", 00:22:59.546 "superblock": true, 00:22:59.546 "num_base_bdevs": 2, 00:22:59.546 "num_base_bdevs_discovered": 2, 00:22:59.546 "num_base_bdevs_operational": 2, 00:22:59.546 "base_bdevs_list": [ 00:22:59.546 { 00:22:59.546 "name": "spare", 00:22:59.546 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:22:59.546 "is_configured": true, 00:22:59.546 "data_offset": 256, 00:22:59.546 "data_size": 7936 00:22:59.546 }, 00:22:59.546 { 00:22:59.546 "name": "BaseBdev2", 00:22:59.546 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:59.546 "is_configured": true, 00:22:59.546 "data_offset": 256, 00:22:59.546 "data_size": 7936 00:22:59.546 } 00:22:59.546 ] 00:22:59.546 }' 00:22:59.546 14:22:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.805 [2024-11-27 14:22:30.597961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:59.805 14:22:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:59.805 "name": "raid_bdev1", 00:22:59.805 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:22:59.805 "strip_size_kb": 0, 00:22:59.805 "state": "online", 00:22:59.805 
"raid_level": "raid1", 00:22:59.805 "superblock": true, 00:22:59.805 "num_base_bdevs": 2, 00:22:59.805 "num_base_bdevs_discovered": 1, 00:22:59.805 "num_base_bdevs_operational": 1, 00:22:59.805 "base_bdevs_list": [ 00:22:59.805 { 00:22:59.805 "name": null, 00:22:59.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.805 "is_configured": false, 00:22:59.805 "data_offset": 0, 00:22:59.805 "data_size": 7936 00:22:59.805 }, 00:22:59.805 { 00:22:59.805 "name": "BaseBdev2", 00:22:59.805 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:22:59.805 "is_configured": true, 00:22:59.805 "data_offset": 256, 00:22:59.805 "data_size": 7936 00:22:59.805 } 00:22:59.805 ] 00:22:59.805 }' 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:59.805 14:22:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.372 14:22:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:00.372 14:22:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.372 14:22:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.372 [2024-11-27 14:22:31.077176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.372 [2024-11-27 14:22:31.077437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:00.372 [2024-11-27 14:22:31.077501] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
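Aside: earlier in this trace, `bdev_raid.sh: line 666: [: =: unary operator expected` was logged because an unquoted variable expanded to nothing inside single-bracket `[`, so the test command degenerated to `[ = false ]`. Below is a minimal sketch of that failure mode and the two usual fixes; the variable name `fast` is illustrative, not the actual name used in bdev_raid.sh:

```shell
#!/usr/bin/env bash
fast=""   # empty/unset variable, as in the traced run

# Failing form: unquoted $fast vanishes after word splitting, leaving
# '[ = false ]' -- the '[' builtin reports "unary operator expected".
# Run it in a subshell and suppress stderr so the demo continues.
( [ $fast = false ] ) 2>/dev/null
echo "unquoted test status: $?"   # non-zero: syntax error inside [

# Fix 1: quote the expansion so '[' always receives three arguments.
if [ "$fast" = false ]; then
  echo "quoted compare matched"
fi

# Fix 2: use bash's [[ ]], which performs no word splitting on $fast.
if [[ $fast == false ]]; then
  echo "double-bracket compare matched"
fi

echo "done"
```

Neither comparison matches an empty string, so only the status line and `done` are printed; the point is that both fixed forms fail cleanly instead of raising a syntax error.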
00:23:00.372 [2024-11-27 14:22:31.077575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:00.372 [2024-11-27 14:22:31.093738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:00.372 14:22:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.372 14:22:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:00.372 [2024-11-27 14:22:31.095605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:01.308 "name": "raid_bdev1", 00:23:01.308 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:01.308 "strip_size_kb": 0, 00:23:01.308 "state": "online", 00:23:01.308 "raid_level": "raid1", 00:23:01.308 "superblock": true, 00:23:01.308 "num_base_bdevs": 2, 00:23:01.308 "num_base_bdevs_discovered": 2, 00:23:01.308 "num_base_bdevs_operational": 2, 00:23:01.308 "process": { 00:23:01.308 "type": "rebuild", 00:23:01.308 "target": "spare", 00:23:01.308 "progress": { 00:23:01.308 "blocks": 2560, 00:23:01.308 "percent": 32 00:23:01.308 } 00:23:01.308 }, 00:23:01.308 "base_bdevs_list": [ 00:23:01.308 { 00:23:01.308 "name": "spare", 00:23:01.308 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:23:01.308 "is_configured": true, 00:23:01.308 "data_offset": 256, 00:23:01.308 "data_size": 7936 00:23:01.308 }, 00:23:01.308 { 00:23:01.308 "name": "BaseBdev2", 00:23:01.308 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:01.308 "is_configured": true, 00:23:01.308 "data_offset": 256, 00:23:01.308 "data_size": 7936 00:23:01.308 } 00:23:01.308 ] 00:23:01.308 }' 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.308 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.308 [2024-11-27 14:22:32.239108] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.567 [2024-11-27 14:22:32.300586] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:01.567 [2024-11-27 14:22:32.300659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.567 [2024-11-27 14:22:32.300681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:01.567 [2024-11-27 14:22:32.300691] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.567 14:22:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.567 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.568 "name": "raid_bdev1", 00:23:01.568 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:01.568 "strip_size_kb": 0, 00:23:01.568 "state": "online", 00:23:01.568 "raid_level": "raid1", 00:23:01.568 "superblock": true, 00:23:01.568 "num_base_bdevs": 2, 00:23:01.568 "num_base_bdevs_discovered": 1, 00:23:01.568 "num_base_bdevs_operational": 1, 00:23:01.568 "base_bdevs_list": [ 00:23:01.568 { 00:23:01.568 "name": null, 00:23:01.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.568 "is_configured": false, 00:23:01.568 "data_offset": 0, 00:23:01.568 "data_size": 7936 00:23:01.568 }, 00:23:01.568 { 00:23:01.568 "name": "BaseBdev2", 00:23:01.568 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:01.568 "is_configured": true, 00:23:01.568 "data_offset": 256, 00:23:01.568 "data_size": 7936 00:23:01.568 } 00:23:01.568 ] 00:23:01.568 }' 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.568 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.827 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:01.827 14:22:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.827 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.827 [2024-11-27 14:22:32.775617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:01.827 [2024-11-27 14:22:32.775735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.827 [2024-11-27 14:22:32.775782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:01.827 [2024-11-27 14:22:32.775813] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.827 [2024-11-27 14:22:32.776056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.827 [2024-11-27 14:22:32.776107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:01.827 [2024-11-27 14:22:32.776214] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:01.827 [2024-11-27 14:22:32.776256] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:01.827 [2024-11-27 14:22:32.776302] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:01.827 [2024-11-27 14:22:32.776348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.085 [2024-11-27 14:22:32.792361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:02.085 spare 00:23:02.085 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.085 14:22:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:02.085 [2024-11-27 14:22:32.794154] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:03.021 "name": "raid_bdev1", 00:23:03.021 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:03.021 "strip_size_kb": 0, 00:23:03.021 "state": "online", 00:23:03.021 "raid_level": "raid1", 00:23:03.021 "superblock": true, 00:23:03.021 "num_base_bdevs": 2, 00:23:03.021 "num_base_bdevs_discovered": 2, 00:23:03.021 "num_base_bdevs_operational": 2, 00:23:03.021 "process": { 00:23:03.021 "type": "rebuild", 00:23:03.021 "target": "spare", 00:23:03.021 "progress": { 00:23:03.021 "blocks": 2560, 00:23:03.021 "percent": 32 00:23:03.021 } 00:23:03.021 }, 00:23:03.021 "base_bdevs_list": [ 00:23:03.021 { 00:23:03.021 "name": "spare", 00:23:03.021 "uuid": "6de4b584-a95f-5d4b-a527-a5ef793209d9", 00:23:03.021 "is_configured": true, 00:23:03.021 "data_offset": 256, 00:23:03.021 "data_size": 7936 00:23:03.021 }, 00:23:03.021 { 00:23:03.021 "name": "BaseBdev2", 00:23:03.021 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:03.021 "is_configured": true, 00:23:03.021 "data_offset": 256, 00:23:03.021 "data_size": 7936 00:23:03.021 } 00:23:03.021 ] 00:23:03.021 }' 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.021 14:22:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.021 [2024-11-27 
14:22:33.942152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.303 [2024-11-27 14:22:33.999060] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:03.303 [2024-11-27 14:22:33.999110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.303 [2024-11-27 14:22:33.999146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.303 [2024-11-27 14:22:33.999153] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.303 14:22:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.303 "name": "raid_bdev1", 00:23:03.303 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:03.303 "strip_size_kb": 0, 00:23:03.303 "state": "online", 00:23:03.303 "raid_level": "raid1", 00:23:03.303 "superblock": true, 00:23:03.303 "num_base_bdevs": 2, 00:23:03.303 "num_base_bdevs_discovered": 1, 00:23:03.303 "num_base_bdevs_operational": 1, 00:23:03.303 "base_bdevs_list": [ 00:23:03.303 { 00:23:03.303 "name": null, 00:23:03.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.303 "is_configured": false, 00:23:03.303 "data_offset": 0, 00:23:03.303 "data_size": 7936 00:23:03.303 }, 00:23:03.303 { 00:23:03.303 "name": "BaseBdev2", 00:23:03.303 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:03.303 "is_configured": true, 00:23:03.303 "data_offset": 256, 00:23:03.303 "data_size": 7936 00:23:03.303 } 00:23:03.303 ] 00:23:03.303 }' 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.303 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.569 14:22:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.569 "name": "raid_bdev1", 00:23:03.569 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:03.569 "strip_size_kb": 0, 00:23:03.569 "state": "online", 00:23:03.569 "raid_level": "raid1", 00:23:03.569 "superblock": true, 00:23:03.569 "num_base_bdevs": 2, 00:23:03.569 "num_base_bdevs_discovered": 1, 00:23:03.569 "num_base_bdevs_operational": 1, 00:23:03.569 "base_bdevs_list": [ 00:23:03.569 { 00:23:03.569 "name": null, 00:23:03.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.569 "is_configured": false, 00:23:03.569 "data_offset": 0, 00:23:03.569 "data_size": 7936 00:23:03.569 }, 00:23:03.569 { 00:23:03.569 "name": "BaseBdev2", 00:23:03.569 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:03.569 "is_configured": true, 00:23:03.569 "data_offset": 256, 
00:23:03.569 "data_size": 7936 00:23:03.569 } 00:23:03.569 ] 00:23:03.569 }' 00:23:03.569 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.828 [2024-11-27 14:22:34.605342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:03.828 [2024-11-27 14:22:34.605418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.828 [2024-11-27 14:22:34.605444] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:03.828 [2024-11-27 14:22:34.605453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.828 [2024-11-27 14:22:34.605649] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.828 [2024-11-27 14:22:34.605665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:03.828 [2024-11-27 14:22:34.605722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:03.828 [2024-11-27 14:22:34.605734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.828 [2024-11-27 14:22:34.605743] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:03.828 [2024-11-27 14:22:34.605753] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:03.828 BaseBdev1 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.828 14:22:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.764 14:22:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.764 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.765 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.765 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.765 "name": "raid_bdev1", 00:23:04.765 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:04.765 "strip_size_kb": 0, 00:23:04.765 "state": "online", 00:23:04.765 "raid_level": "raid1", 00:23:04.765 "superblock": true, 00:23:04.765 "num_base_bdevs": 2, 00:23:04.765 "num_base_bdevs_discovered": 1, 00:23:04.765 "num_base_bdevs_operational": 1, 00:23:04.765 "base_bdevs_list": [ 00:23:04.765 { 00:23:04.765 "name": null, 00:23:04.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.765 "is_configured": false, 00:23:04.765 "data_offset": 0, 00:23:04.765 "data_size": 7936 00:23:04.765 }, 00:23:04.765 { 00:23:04.765 "name": "BaseBdev2", 00:23:04.765 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:04.765 "is_configured": true, 00:23:04.765 "data_offset": 256, 00:23:04.765 "data_size": 7936 00:23:04.765 } 00:23:04.765 ] 00:23:04.765 }' 00:23:04.765 14:22:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.765 14:22:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:05.334 "name": "raid_bdev1", 00:23:05.334 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:05.334 "strip_size_kb": 0, 00:23:05.334 "state": "online", 00:23:05.334 "raid_level": "raid1", 00:23:05.334 "superblock": true, 00:23:05.334 "num_base_bdevs": 2, 00:23:05.334 "num_base_bdevs_discovered": 1, 00:23:05.334 "num_base_bdevs_operational": 1, 00:23:05.334 "base_bdevs_list": [ 00:23:05.334 { 00:23:05.334 "name": 
null, 00:23:05.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.334 "is_configured": false, 00:23:05.334 "data_offset": 0, 00:23:05.334 "data_size": 7936 00:23:05.334 }, 00:23:05.334 { 00:23:05.334 "name": "BaseBdev2", 00:23:05.334 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:05.334 "is_configured": true, 00:23:05.334 "data_offset": 256, 00:23:05.334 "data_size": 7936 00:23:05.334 } 00:23:05.334 ] 00:23:05.334 }' 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.334 [2024-11-27 14:22:36.158763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.334 [2024-11-27 14:22:36.158936] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:05.334 [2024-11-27 14:22:36.158955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:05.334 request: 00:23:05.334 { 00:23:05.334 "base_bdev": "BaseBdev1", 00:23:05.334 "raid_bdev": "raid_bdev1", 00:23:05.334 "method": "bdev_raid_add_base_bdev", 00:23:05.334 "req_id": 1 00:23:05.334 } 00:23:05.334 Got JSON-RPC error response 00:23:05.334 response: 00:23:05.334 { 00:23:05.334 "code": -22, 00:23:05.334 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:05.334 } 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.334 14:22:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.272 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.531 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.531 "name": "raid_bdev1", 00:23:06.531 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:06.531 "strip_size_kb": 0, 
00:23:06.531 "state": "online", 00:23:06.531 "raid_level": "raid1", 00:23:06.531 "superblock": true, 00:23:06.531 "num_base_bdevs": 2, 00:23:06.531 "num_base_bdevs_discovered": 1, 00:23:06.531 "num_base_bdevs_operational": 1, 00:23:06.531 "base_bdevs_list": [ 00:23:06.531 { 00:23:06.531 "name": null, 00:23:06.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.531 "is_configured": false, 00:23:06.531 "data_offset": 0, 00:23:06.531 "data_size": 7936 00:23:06.531 }, 00:23:06.531 { 00:23:06.531 "name": "BaseBdev2", 00:23:06.531 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:06.531 "is_configured": true, 00:23:06.531 "data_offset": 256, 00:23:06.531 "data_size": 7936 00:23:06.531 } 00:23:06.531 ] 00:23:06.531 }' 00:23:06.531 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.531 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.790 14:22:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.790 "name": "raid_bdev1", 00:23:06.790 "uuid": "5d20c944-48e6-4a84-8a1d-e39d3e7292f0", 00:23:06.790 "strip_size_kb": 0, 00:23:06.790 "state": "online", 00:23:06.790 "raid_level": "raid1", 00:23:06.790 "superblock": true, 00:23:06.790 "num_base_bdevs": 2, 00:23:06.790 "num_base_bdevs_discovered": 1, 00:23:06.790 "num_base_bdevs_operational": 1, 00:23:06.790 "base_bdevs_list": [ 00:23:06.790 { 00:23:06.790 "name": null, 00:23:06.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.790 "is_configured": false, 00:23:06.790 "data_offset": 0, 00:23:06.790 "data_size": 7936 00:23:06.790 }, 00:23:06.790 { 00:23:06.790 "name": "BaseBdev2", 00:23:06.790 "uuid": "c9e4e791-0c28-55a0-8a60-a150e8fbe5e7", 00:23:06.790 "is_configured": true, 00:23:06.790 "data_offset": 256, 00:23:06.790 "data_size": 7936 00:23:06.790 } 00:23:06.790 ] 00:23:06.790 }' 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89290 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89290 ']' 00:23:06.790 14:22:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89290 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.790 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89290 00:23:07.049 killing process with pid 89290 00:23:07.049 Received shutdown signal, test time was about 60.000000 seconds 00:23:07.049 00:23:07.049 Latency(us) 00:23:07.049 [2024-11-27T14:22:38.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.049 [2024-11-27T14:22:38.005Z] =================================================================================================================== 00:23:07.049 [2024-11-27T14:22:38.005Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.049 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.049 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.049 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89290' 00:23:07.049 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89290 00:23:07.049 [2024-11-27 14:22:37.767834] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:07.049 [2024-11-27 14:22:37.767975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:07.049 14:22:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89290 00:23:07.049 [2024-11-27 14:22:37.768024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:23:07.049 [2024-11-27 14:22:37.768035] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:07.308 [2024-11-27 14:22:38.066762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:08.244 14:22:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:08.244 00:23:08.244 real 0m17.418s 00:23:08.244 user 0m22.749s 00:23:08.244 sys 0m1.668s 00:23:08.244 ************************************ 00:23:08.244 END TEST raid_rebuild_test_sb_md_interleaved 00:23:08.244 ************************************ 00:23:08.244 14:22:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.244 14:22:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.502 14:22:39 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:08.502 14:22:39 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:08.502 14:22:39 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89290 ']' 00:23:08.502 14:22:39 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89290 00:23:08.502 14:22:39 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:08.502 00:23:08.502 real 12m17.073s 00:23:08.502 user 16m40.095s 00:23:08.502 sys 1m52.472s 00:23:08.502 ************************************ 00:23:08.502 END TEST bdev_raid 00:23:08.502 ************************************ 00:23:08.502 14:22:39 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:08.502 14:22:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:08.502 14:22:39 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:08.502 14:22:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:08.502 14:22:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:08.502 14:22:39 -- common/autotest_common.sh@10 -- # set +x 00:23:08.502 
************************************ 00:23:08.502 START TEST spdkcli_raid 00:23:08.502 ************************************ 00:23:08.502 14:22:39 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:08.502 * Looking for test storage... 00:23:08.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:08.502 14:22:39 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:08.502 14:22:39 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:08.502 14:22:39 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:08.760 14:22:39 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:08.760 14:22:39 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.760 14:22:39 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.760 14:22:39 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.760 14:22:39 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.760 14:22:39 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.761 14:22:39 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:08.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.761 --rc genhtml_branch_coverage=1 00:23:08.761 --rc genhtml_function_coverage=1 00:23:08.761 --rc genhtml_legend=1 00:23:08.761 --rc geninfo_all_blocks=1 00:23:08.761 --rc geninfo_unexecuted_blocks=1 00:23:08.761 00:23:08.761 ' 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:08.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.761 --rc genhtml_branch_coverage=1 00:23:08.761 --rc genhtml_function_coverage=1 00:23:08.761 --rc genhtml_legend=1 00:23:08.761 --rc geninfo_all_blocks=1 00:23:08.761 --rc geninfo_unexecuted_blocks=1 00:23:08.761 00:23:08.761 ' 00:23:08.761 
14:22:39 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:08.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.761 --rc genhtml_branch_coverage=1 00:23:08.761 --rc genhtml_function_coverage=1 00:23:08.761 --rc genhtml_legend=1 00:23:08.761 --rc geninfo_all_blocks=1 00:23:08.761 --rc geninfo_unexecuted_blocks=1 00:23:08.761 00:23:08.761 ' 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:08.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.761 --rc genhtml_branch_coverage=1 00:23:08.761 --rc genhtml_function_coverage=1 00:23:08.761 --rc genhtml_legend=1 00:23:08.761 --rc geninfo_all_blocks=1 00:23:08.761 --rc geninfo_unexecuted_blocks=1 00:23:08.761 00:23:08.761 ' 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:08.761 14:22:39 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89967 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89967 00:23:08.761 14:22:39 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89967 ']' 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.761 14:22:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:08.761 [2024-11-27 14:22:39.661611] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:08.761 [2024-11-27 14:22:39.661861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89967 ] 00:23:09.019 [2024-11-27 14:22:39.844473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:09.019 [2024-11-27 14:22:39.958300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.019 [2024-11-27 14:22:39.958352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:23:09.955 14:22:40 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 14:22:40 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.955 14:22:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:09.955 14:22:40 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:09.955 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:09.955 ' 00:23:11.856 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:11.856 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:11.856 14:22:42 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:11.856 14:22:42 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.856 14:22:42 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.856 14:22:42 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:11.856 14:22:42 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.856 14:22:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:11.856 14:22:42 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:11.856 ' 00:23:12.794 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:23:13.053 14:22:43 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:23:13.053 14:22:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.053 14:22:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.053 14:22:43 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:23:13.053 14:22:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.053 14:22:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.053 14:22:43 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:23:13.053 14:22:43 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:23:13.621 14:22:44 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:23:13.621 14:22:44 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:23:13.621 14:22:44 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:23:13.621 14:22:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.621 14:22:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.621 14:22:44 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:23:13.621 14:22:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.621 14:22:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.621 14:22:44 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:23:13.621 ' 00:23:14.557 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:23:14.816 14:22:45 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:23:14.816 14:22:45 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.816 14:22:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.816 14:22:45 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:23:14.816 14:22:45 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.816 14:22:45 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:14.816 14:22:45 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:23:14.816 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:23:14.816 ' 00:23:16.194 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:23:16.194 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:23:16.194 14:22:47 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:23:16.194 14:22:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.194 14:22:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.453 14:22:47 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89967 00:23:16.453 14:22:47 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89967 ']' 00:23:16.453 14:22:47 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89967 00:23:16.453 14:22:47 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:23:16.453 14:22:47 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.453 14:22:47 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89967 00:23:16.454 14:22:47 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.454 14:22:47 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.454 14:22:47 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89967' 00:23:16.454 killing process with pid 89967 00:23:16.454 14:22:47 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89967 00:23:16.454 14:22:47 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89967 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89967 ']' 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89967 00:23:18.992 14:22:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89967 ']' 00:23:18.992 14:22:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89967 00:23:18.992 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89967) - No such process 00:23:18.992 Process with pid 89967 is not found 00:23:18.992 14:22:49 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89967 is not found' 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:18.992 14:22:49 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:18.992 00:23:18.992 real 0m10.367s 00:23:18.992 user 0m21.353s 00:23:18.992 sys 
0m1.204s 00:23:18.992 14:22:49 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.992 14:22:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.992 ************************************ 00:23:18.992 END TEST spdkcli_raid 00:23:18.992 ************************************ 00:23:18.992 14:22:49 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:18.992 14:22:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.992 14:22:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.992 14:22:49 -- common/autotest_common.sh@10 -- # set +x 00:23:18.992 ************************************ 00:23:18.992 START TEST blockdev_raid5f 00:23:18.992 ************************************ 00:23:18.992 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:18.992 * Looking for test storage... 00:23:18.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:18.992 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.992 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.992 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.992 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.992 14:22:49 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.251 14:22:49 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:23:19.251 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.251 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:19.251 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.251 --rc genhtml_branch_coverage=1 00:23:19.251 --rc genhtml_function_coverage=1 00:23:19.251 --rc genhtml_legend=1 00:23:19.251 --rc geninfo_all_blocks=1 00:23:19.251 --rc geninfo_unexecuted_blocks=1 00:23:19.251 00:23:19.251 ' 00:23:19.251 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.251 --rc genhtml_branch_coverage=1 00:23:19.251 --rc genhtml_function_coverage=1 00:23:19.251 --rc genhtml_legend=1 00:23:19.251 --rc geninfo_all_blocks=1 00:23:19.251 --rc geninfo_unexecuted_blocks=1 00:23:19.251 00:23:19.251 ' 00:23:19.251 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.251 --rc genhtml_branch_coverage=1 00:23:19.251 --rc genhtml_function_coverage=1 00:23:19.251 --rc genhtml_legend=1 00:23:19.251 --rc geninfo_all_blocks=1 00:23:19.251 --rc geninfo_unexecuted_blocks=1 00:23:19.251 00:23:19.251 ' 00:23:19.251 14:22:49 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.251 --rc genhtml_branch_coverage=1 00:23:19.251 --rc genhtml_function_coverage=1 00:23:19.251 --rc genhtml_legend=1 00:23:19.251 --rc geninfo_all_blocks=1 00:23:19.251 --rc geninfo_unexecuted_blocks=1 00:23:19.251 00:23:19.251 ' 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:23:19.251 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90253 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:19.252 14:22:49 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90253 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90253 ']' 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.252 14:22:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:19.252 [2024-11-27 14:22:50.088269] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:19.252 [2024-11-27 14:22:50.088460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90253 ] 00:23:19.511 [2024-11-27 14:22:50.265941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.511 [2024-11-27 14:22:50.387622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.446 14:22:51 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.446 14:22:51 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:23:20.446 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:23:20.446 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:23:20.446 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:23:20.446 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.446 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.446 Malloc0 00:23:20.446 Malloc1 00:23:20.446 Malloc2 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 14:22:51 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0b75906a-be76-4caf-aebe-1cc27825bdbd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0b75906a-be76-4caf-aebe-1cc27825bdbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0b75906a-be76-4caf-aebe-1cc27825bdbd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "096adbf1-b9a6-460c-a242-f142f9b6b28a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ab624010-e376-400f-a987-34b2d2e44189",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b15f136c-8f69-4583-b3af-f8fc732950e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:23:20.705 14:22:51 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90253 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90253 ']' 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90253 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.705 
14:22:51 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90253 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90253' 00:23:20.705 killing process with pid 90253 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90253 00:23:20.705 14:22:51 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90253 00:23:23.997 14:22:54 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:23.997 14:22:54 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:23.997 14:22:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:23.997 14:22:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.997 14:22:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:23.997 ************************************ 00:23:23.997 START TEST bdev_hello_world 00:23:23.997 ************************************ 00:23:23.997 14:22:54 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:23.997 [2024-11-27 14:22:54.431804] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:23.997 [2024-11-27 14:22:54.432094] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90325 ] 00:23:23.997 [2024-11-27 14:22:54.609452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.997 [2024-11-27 14:22:54.726528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.581 [2024-11-27 14:22:55.265575] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:24.581 [2024-11-27 14:22:55.265697] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:23:24.581 [2024-11-27 14:22:55.265718] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:24.581 [2024-11-27 14:22:55.266195] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:24.581 [2024-11-27 14:22:55.266345] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:24.581 [2024-11-27 14:22:55.266361] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:24.581 [2024-11-27 14:22:55.266410] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:23:24.581 00:23:24.581 [2024-11-27 14:22:55.266428] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:25.959 ************************************ 00:23:25.959 END TEST bdev_hello_world 00:23:25.959 ************************************ 00:23:25.959 00:23:25.959 real 0m2.367s 00:23:25.959 user 0m1.995s 00:23:25.959 sys 0m0.252s 00:23:25.959 14:22:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:25.959 14:22:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:25.959 14:22:56 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:23:25.959 14:22:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:25.959 14:22:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:25.959 14:22:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:25.959 ************************************ 00:23:25.959 START TEST bdev_bounds 00:23:25.959 ************************************ 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90370 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:25.959 Process bdevio pid: 90370 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90370' 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90370 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90370 ']' 00:23:25.959 14:22:56 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.959 14:22:56 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:25.959 [2024-11-27 14:22:56.868821] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:25.959 [2024-11-27 14:22:56.869075] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90370 ] 00:23:26.218 [2024-11-27 14:22:57.050503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:26.218 [2024-11-27 14:22:57.167775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.218 [2024-11-27 14:22:57.167975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.218 [2024-11-27 14:22:57.168014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.786 14:22:57 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.786 14:22:57 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:26.786 14:22:57 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:27.044 I/O targets: 00:23:27.044 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:23:27.044 00:23:27.044 
00:23:27.044 CUnit - A unit testing framework for C - Version 2.1-3 00:23:27.044 http://cunit.sourceforge.net/ 00:23:27.044 00:23:27.044 00:23:27.044 Suite: bdevio tests on: raid5f 00:23:27.044 Test: blockdev write read block ...passed 00:23:27.044 Test: blockdev write zeroes read block ...passed 00:23:27.044 Test: blockdev write zeroes read no split ...passed 00:23:27.044 Test: blockdev write zeroes read split ...passed 00:23:27.303 Test: blockdev write zeroes read split partial ...passed 00:23:27.303 Test: blockdev reset ...passed 00:23:27.303 Test: blockdev write read 8 blocks ...passed 00:23:27.303 Test: blockdev write read size > 128k ...passed 00:23:27.303 Test: blockdev write read invalid size ...passed 00:23:27.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.303 Test: blockdev write read max offset ...passed 00:23:27.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.303 Test: blockdev writev readv 8 blocks ...passed 00:23:27.303 Test: blockdev writev readv 30 x 1block ...passed 00:23:27.303 Test: blockdev writev readv block ...passed 00:23:27.303 Test: blockdev writev readv size > 128k ...passed 00:23:27.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:27.303 Test: blockdev comparev and writev ...passed 00:23:27.303 Test: blockdev nvme passthru rw ...passed 00:23:27.303 Test: blockdev nvme passthru vendor specific ...passed 00:23:27.303 Test: blockdev nvme admin passthru ...passed 00:23:27.303 Test: blockdev copy ...passed 00:23:27.303 00:23:27.303 Run Summary: Type Total Ran Passed Failed Inactive 00:23:27.303 suites 1 1 n/a 0 0 00:23:27.303 tests 23 23 23 0 0 00:23:27.303 asserts 130 130 130 0 n/a 00:23:27.303 00:23:27.303 Elapsed time = 0.670 seconds 00:23:27.303 0 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90370 00:23:27.303 
14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90370 ']' 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90370 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90370 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90370' 00:23:27.303 killing process with pid 90370 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90370 00:23:27.303 14:22:58 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90370 00:23:28.680 14:22:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:28.680 00:23:28.680 real 0m2.799s 00:23:28.680 user 0m6.918s 00:23:28.680 sys 0m0.408s 00:23:28.680 14:22:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.680 14:22:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:28.680 ************************************ 00:23:28.680 END TEST bdev_bounds 00:23:28.680 ************************************ 00:23:28.680 14:22:59 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:28.680 14:22:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:28.680 14:22:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.680 
14:22:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:28.939 ************************************ 00:23:28.939 START TEST bdev_nbd 00:23:28.939 ************************************ 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:23:28.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90430 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90430 /var/tmp/spdk-nbd.sock 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90430 ']' 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:28.939 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.940 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:28.940 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.940 14:22:59 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:28.940 [2024-11-27 14:22:59.745766] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:28.940 [2024-11-27 14:22:59.746022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.198 [2024-11-27 14:22:59.915248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.198 [2024-11-27 14:23:00.029399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:29.767 14:23:00 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:30.050 1+0 records in 00:23:30.050 1+0 records out 00:23:30.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411358 s, 10.0 MB/s 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:30.050 14:23:00 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:30.327 { 00:23:30.327 "nbd_device": "/dev/nbd0", 00:23:30.327 "bdev_name": "raid5f" 00:23:30.327 } 00:23:30.327 ]' 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:30.327 { 00:23:30.327 "nbd_device": "/dev/nbd0", 00:23:30.327 "bdev_name": "raid5f" 00:23:30.327 } 00:23:30.327 ]' 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.327 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.585 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.843 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:23:31.101 /dev/nbd0 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:31.101 14:23:01 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.101 1+0 records in 00:23:31.101 1+0 records out 00:23:31.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500849 s, 8.2 MB/s 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.101 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:31.102 14:23:01 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:31.360 { 00:23:31.360 "nbd_device": "/dev/nbd0", 00:23:31.360 "bdev_name": "raid5f" 00:23:31.360 } 00:23:31.360 ]' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:31.360 { 00:23:31.360 "nbd_device": "/dev/nbd0", 00:23:31.360 "bdev_name": "raid5f" 00:23:31.360 } 00:23:31.360 ]' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:31.360 256+0 records in 00:23:31.360 256+0 records out 00:23:31.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554368 s, 189 MB/s 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:31.360 256+0 records in 00:23:31.360 256+0 records out 00:23:31.360 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287367 s, 36.5 MB/s 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:31.360 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:31.620 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:31.879 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:32.138 malloc_lvol_verify 00:23:32.138 14:23:02 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:32.396 87e48b55-146c-4027-979e-eb0fdf02ab95 00:23:32.396 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:32.655 56d7ece8-fbf0-48ec-8367-e927a475ecc0 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:32.655 /dev/nbd0 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:32.655 mke2fs 1.47.0 (5-Feb-2023) 00:23:32.655 Discarding device blocks: 0/4096 done 00:23:32.655 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:32.655 00:23:32.655 Allocating group tables: 0/1 done 00:23:32.655 Writing inode tables: 0/1 done 00:23:32.655 Creating journal (1024 blocks): done 00:23:32.655 Writing superblocks and filesystem accounting information: 0/1 done 00:23:32.655 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:32.655 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90430 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90430 ']' 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90430 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90430 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.914 killing process with pid 90430 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90430' 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90430 00:23:32.914 14:23:03 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90430 00:23:34.833 14:23:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:34.833 00:23:34.833 real 0m5.679s 00:23:34.833 user 0m7.693s 00:23:34.833 sys 0m1.310s 00:23:34.833 14:23:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.833 14:23:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:34.833 ************************************ 00:23:34.833 END TEST bdev_nbd 00:23:34.833 ************************************ 00:23:34.833 14:23:05 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:23:34.833 14:23:05 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:23:34.833 14:23:05 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:23:34.833 14:23:05 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:23:34.833 14:23:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:34.833 14:23:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.833 14:23:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:34.833 ************************************ 00:23:34.833 START TEST bdev_fio 00:23:34.833 ************************************ 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:34.833 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:34.833 ************************************ 00:23:34.833 START TEST bdev_fio_rw_verify 00:23:34.833 ************************************ 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.833 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:34.834 14:23:05 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:34.834 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:34.834 fio-3.35 00:23:34.834 Starting 1 thread 00:23:47.039 00:23:47.039 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90634: Wed Nov 27 14:23:16 2024 00:23:47.039 read: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(424MiB/10001msec) 00:23:47.039 slat (nsec): min=18491, max=96658, avg=22593.91, stdev=3327.03 00:23:47.039 clat (usec): min=9, max=427, avg=146.58, stdev=55.56 00:23:47.039 lat (usec): min=29, max=451, avg=169.17, stdev=56.42 00:23:47.039 clat percentiles (usec): 00:23:47.039 | 50.000th=[ 149], 99.000th=[ 273], 99.900th=[ 314], 99.990th=[ 375], 00:23:47.039 | 99.999th=[ 404] 00:23:47.039 write: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(441MiB/9874msec); 0 zone resets 00:23:47.039 slat (usec): min=8, max=192, avg=18.50, stdev= 4.46 00:23:47.039 clat (usec): min=64, max=1306, avg=334.27, stdev=53.60 00:23:47.039 lat (usec): min=82, max=1393, avg=352.77, stdev=55.43 00:23:47.039 clat percentiles (usec): 00:23:47.039 | 50.000th=[ 334], 99.000th=[ 478], 99.900th=[ 627], 99.990th=[ 1020], 00:23:47.039 | 99.999th=[ 1237] 00:23:47.039 bw ( KiB/s): min=40536, max=49176, per=98.75%, avg=45141.47, stdev=2428.65, samples=19 00:23:47.039 iops : min=10134, max=12294, avg=11285.37, stdev=607.16, samples=19 00:23:47.039 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=12.33%, 250=37.21% 00:23:47.039 lat (usec) : 500=50.17%, 750=0.26%, 1000=0.01% 00:23:47.039 lat (msec) : 2=0.01% 00:23:47.039 cpu : usr=98.87%, sys=0.48%, ctx=65, majf=0, minf=9082 00:23:47.039 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:47.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.039 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:47.039 issued rwts: total=108556,112842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:47.039 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:47.039 00:23:47.039 Run status group 0 (all jobs): 00:23:47.039 READ: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=424MiB (445MB), run=10001-10001msec 00:23:47.039 WRITE: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=441MiB (462MB), run=9874-9874msec 00:23:47.610 ----------------------------------------------------- 00:23:47.610 Suppressions used: 00:23:47.610 count bytes template 00:23:47.610 1 7 /usr/src/fio/parse.c 00:23:47.610 902 86592 /usr/src/fio/iolog.c 00:23:47.610 1 8 libtcmalloc_minimal.so 00:23:47.610 1 904 libcrypto.so 00:23:47.610 ----------------------------------------------------- 00:23:47.610 00:23:47.610 00:23:47.610 real 0m12.822s 00:23:47.610 user 0m12.568s 00:23:47.610 sys 0m0.821s 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:47.610 ************************************ 00:23:47.610 END TEST bdev_fio_rw_verify 00:23:47.610 ************************************ 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.610 14:23:18 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "0b75906a-be76-4caf-aebe-1cc27825bdbd"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0b75906a-be76-4caf-aebe-1cc27825bdbd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "0b75906a-be76-4caf-aebe-1cc27825bdbd",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "096adbf1-b9a6-460c-a242-f142f9b6b28a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ab624010-e376-400f-a987-34b2d2e44189",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "b15f136c-8f69-4583-b3af-f8fc732950e9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:47.610 /home/vagrant/spdk_repo/spdk 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@363 -- # return 0 00:23:47.610 00:23:47.610 real 0m13.096s 00:23:47.610 user 0m12.677s 00:23:47.610 sys 0m0.965s 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.610 14:23:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:47.610 ************************************ 00:23:47.610 END TEST bdev_fio 00:23:47.610 ************************************ 00:23:47.610 14:23:18 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:47.610 14:23:18 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:47.610 14:23:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:47.610 14:23:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.610 14:23:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:47.610 ************************************ 00:23:47.610 START TEST bdev_verify 00:23:47.610 ************************************ 00:23:47.610 14:23:18 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:47.870 [2024-11-27 14:23:18.634738] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 
00:23:47.870 [2024-11-27 14:23:18.634852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90792 ] 00:23:47.870 [2024-11-27 14:23:18.809631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:48.130 [2024-11-27 14:23:18.933138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.130 [2024-11-27 14:23:18.933204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.697 Running I/O for 5 seconds... 00:23:50.570 9375.00 IOPS, 36.62 MiB/s [2024-11-27T14:23:22.918Z] 9486.00 IOPS, 37.05 MiB/s [2024-11-27T14:23:23.486Z] 9563.33 IOPS, 37.36 MiB/s [2024-11-27T14:23:24.863Z] 9660.50 IOPS, 37.74 MiB/s [2024-11-27T14:23:24.863Z] 9697.60 IOPS, 37.88 MiB/s 00:23:53.907 Latency(us) 00:23:53.907 [2024-11-27T14:23:24.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.907 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:53.907 Verification LBA range: start 0x0 length 0x2000 00:23:53.907 raid5f : 5.02 3920.17 15.31 0.00 0.00 49103.36 284.39 36631.48 00:23:53.907 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:53.907 Verification LBA range: start 0x2000 length 0x2000 00:23:53.907 raid5f : 5.01 5772.32 22.55 0.00 0.00 33461.73 236.10 25756.51 00:23:53.907 [2024-11-27T14:23:24.863Z] =================================================================================================================== 00:23:53.907 [2024-11-27T14:23:24.863Z] Total : 9692.49 37.86 0.00 0.00 39791.93 236.10 36631.48 00:23:55.284 00:23:55.284 real 0m7.414s 00:23:55.284 user 0m13.695s 00:23:55.284 sys 0m0.265s 00:23:55.284 14:23:25 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.284 14:23:25 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:55.284 ************************************ 00:23:55.285 END TEST bdev_verify 00:23:55.285 ************************************ 00:23:55.285 14:23:26 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:55.285 14:23:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:55.285 14:23:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.285 14:23:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:55.285 ************************************ 00:23:55.285 START TEST bdev_verify_big_io 00:23:55.285 ************************************ 00:23:55.285 14:23:26 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:55.285 [2024-11-27 14:23:26.134891] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:23:55.285 [2024-11-27 14:23:26.135005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90890 ] 00:23:55.543 [2024-11-27 14:23:26.310596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:55.543 [2024-11-27 14:23:26.430354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.543 [2024-11-27 14:23:26.430391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.110 Running I/O for 5 seconds... 
00:23:58.425 758.00 IOPS, 47.38 MiB/s [2024-11-27T14:23:30.346Z] 854.50 IOPS, 53.41 MiB/s [2024-11-27T14:23:31.285Z] 846.00 IOPS, 52.88 MiB/s [2024-11-27T14:23:32.222Z] 888.50 IOPS, 55.53 MiB/s [2024-11-27T14:23:32.222Z] 926.40 IOPS, 57.90 MiB/s 00:24:01.266 Latency(us) 00:24:01.266 [2024-11-27T14:23:32.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.266 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:01.266 Verification LBA range: start 0x0 length 0x200 00:24:01.266 raid5f : 5.21 463.14 28.95 0.00 0.00 6856695.30 169.92 298546.53 00:24:01.266 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:01.266 Verification LBA range: start 0x200 length 0x200 00:24:01.266 raid5f : 5.13 470.14 29.38 0.00 0.00 6745948.73 192.28 291220.23 00:24:01.266 [2024-11-27T14:23:32.222Z] =================================================================================================================== 00:24:01.266 [2024-11-27T14:23:32.222Z] Total : 933.28 58.33 0.00 0.00 6801322.01 169.92 298546.53 00:24:03.177 00:24:03.177 real 0m7.618s 00:24:03.177 user 0m14.098s 00:24:03.177 sys 0m0.276s 00:24:03.177 14:23:33 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.177 14:23:33 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:03.177 ************************************ 00:24:03.177 END TEST bdev_verify_big_io 00:24:03.177 ************************************ 00:24:03.177 14:23:33 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.177 14:23:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:03.177 14:23:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.177 14:23:33 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:03.177 ************************************ 00:24:03.177 START TEST bdev_write_zeroes 00:24:03.177 ************************************ 00:24:03.177 14:23:33 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.177 [2024-11-27 14:23:33.797543] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:03.177 [2024-11-27 14:23:33.797663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90989 ] 00:24:03.177 [2024-11-27 14:23:33.971531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.177 [2024-11-27 14:23:34.090169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.746 Running I/O for 1 seconds... 
00:24:04.687 26391.00 IOPS, 103.09 MiB/s 00:24:04.687 Latency(us) 00:24:04.687 [2024-11-27T14:23:35.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.687 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:04.687 raid5f : 1.01 26347.16 102.92 0.00 0.00 4842.41 1387.99 7440.77 00:24:04.687 [2024-11-27T14:23:35.643Z] =================================================================================================================== 00:24:04.687 [2024-11-27T14:23:35.643Z] Total : 26347.16 102.92 0.00 0.00 4842.41 1387.99 7440.77 00:24:06.608 00:24:06.608 real 0m3.340s 00:24:06.608 user 0m2.977s 00:24:06.608 sys 0m0.235s 00:24:06.608 14:23:37 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.608 14:23:37 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:06.608 ************************************ 00:24:06.608 END TEST bdev_write_zeroes 00:24:06.608 ************************************ 00:24:06.608 14:23:37 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:06.608 14:23:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:06.608 14:23:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.608 14:23:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:06.608 ************************************ 00:24:06.608 START TEST bdev_json_nonenclosed 00:24:06.608 ************************************ 00:24:06.608 14:23:37 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:06.608 [2024-11-27 
14:23:37.208493] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:06.608 [2024-11-27 14:23:37.208637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91042 ] 00:24:06.608 [2024-11-27 14:23:37.389699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.608 [2024-11-27 14:23:37.508710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.609 [2024-11-27 14:23:37.508811] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:06.609 [2024-11-27 14:23:37.508837] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:06.609 [2024-11-27 14:23:37.508847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:06.881 00:24:06.881 real 0m0.652s 00:24:06.881 user 0m0.406s 00:24:06.881 sys 0m0.142s 00:24:06.881 14:23:37 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.881 14:23:37 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:06.881 ************************************ 00:24:06.881 END TEST bdev_json_nonenclosed 00:24:06.881 ************************************ 00:24:06.881 14:23:37 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:06.881 14:23:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:06.881 14:23:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.881 14:23:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:06.881 
************************************ 00:24:06.881 START TEST bdev_json_nonarray 00:24:06.881 ************************************ 00:24:06.881 14:23:37 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:07.141 [2024-11-27 14:23:37.913623] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 24.03.0 initialization... 00:24:07.141 [2024-11-27 14:23:37.913751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91073 ] 00:24:07.141 [2024-11-27 14:23:38.088438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.401 [2024-11-27 14:23:38.207582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.401 [2024-11-27 14:23:38.207683] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:24:07.401 [2024-11-27 14:23:38.207700] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:07.401 [2024-11-27 14:23:38.207719] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:07.661 00:24:07.661 real 0m0.628s 00:24:07.661 user 0m0.402s 00:24:07.661 sys 0m0.122s 00:24:07.661 14:23:38 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.661 ************************************ 00:24:07.661 END TEST bdev_json_nonarray 00:24:07.661 ************************************ 00:24:07.661 14:23:38 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:24:07.661 14:23:38 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:24:07.661 00:24:07.661 real 0m48.778s 00:24:07.661 user 1m5.520s 00:24:07.661 sys 0m5.075s 00:24:07.661 14:23:38 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.661 14:23:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:07.661 
************************************ 00:24:07.661 END TEST blockdev_raid5f 00:24:07.661 ************************************ 00:24:07.661 14:23:38 -- spdk/autotest.sh@194 -- # uname -s 00:24:07.661 14:23:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:07.661 14:23:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:07.661 14:23:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:07.661 14:23:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:24:07.661 14:23:38 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:07.662 14:23:38 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:07.662 14:23:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.662 14:23:38 -- common/autotest_common.sh@10 -- # set +x 00:24:07.921 14:23:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:07.921 14:23:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:07.921 14:23:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:07.921 14:23:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:07.921 14:23:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:07.921 14:23:38 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:24:07.921 14:23:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:07.921 14:23:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.921 14:23:38 -- common/autotest_common.sh@10 -- # set +x 00:24:07.921 14:23:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:07.921 14:23:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:07.921 14:23:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:07.921 14:23:38 -- common/autotest_common.sh@10 -- # set +x 00:24:09.826 INFO: APP EXITING 00:24:09.826 INFO: killing all VMs 00:24:09.826 INFO: killing vhost app 00:24:09.826 INFO: EXIT DONE 00:24:10.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:10.086 Waiting for block devices as requested 00:24:10.345 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:10.345 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:11.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:11.284 Cleaning 00:24:11.284 Removing: /var/run/dpdk/spdk0/config 00:24:11.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:11.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:11.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:11.284 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:11.284 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:11.284 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:11.284 Removing: /dev/shm/spdk_tgt_trace.pid57059 00:24:11.284 Removing: /var/run/dpdk/spdk0 00:24:11.284 Removing: /var/run/dpdk/spdk_pid56813 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57059 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57288 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57392 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57448 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57587 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57605 
00:24:11.284 Removing: /var/run/dpdk/spdk_pid57815 00:24:11.284 Removing: /var/run/dpdk/spdk_pid57928 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58035 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58162 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58270 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58310 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58352 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58424 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58512 00:24:11.284 Removing: /var/run/dpdk/spdk_pid58967 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59046 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59121 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59143 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59294 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59311 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59465 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59486 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59558 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59576 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59645 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59669 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59864 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59906 00:24:11.284 Removing: /var/run/dpdk/spdk_pid59995 00:24:11.284 Removing: /var/run/dpdk/spdk_pid61371 00:24:11.284 Removing: /var/run/dpdk/spdk_pid61583 00:24:11.284 Removing: /var/run/dpdk/spdk_pid61729 00:24:11.284 Removing: /var/run/dpdk/spdk_pid62376 00:24:11.284 Removing: /var/run/dpdk/spdk_pid62589 00:24:11.284 Removing: /var/run/dpdk/spdk_pid62729 00:24:11.284 Removing: /var/run/dpdk/spdk_pid63378 00:24:11.284 Removing: /var/run/dpdk/spdk_pid63714 00:24:11.284 Removing: /var/run/dpdk/spdk_pid63859 00:24:11.284 Removing: /var/run/dpdk/spdk_pid65261 00:24:11.284 Removing: /var/run/dpdk/spdk_pid65514 00:24:11.284 Removing: /var/run/dpdk/spdk_pid65661 00:24:11.284 Removing: /var/run/dpdk/spdk_pid67057 00:24:11.284 Removing: /var/run/dpdk/spdk_pid67310 00:24:11.284 Removing: /var/run/dpdk/spdk_pid67461 
00:24:11.284 Removing: /var/run/dpdk/spdk_pid68863 00:24:11.284 Removing: /var/run/dpdk/spdk_pid69303 00:24:11.284 Removing: /var/run/dpdk/spdk_pid69449 00:24:11.543 Removing: /var/run/dpdk/spdk_pid70949 00:24:11.543 Removing: /var/run/dpdk/spdk_pid71214 00:24:11.543 Removing: /var/run/dpdk/spdk_pid71367 00:24:11.543 Removing: /var/run/dpdk/spdk_pid72859 00:24:11.543 Removing: /var/run/dpdk/spdk_pid73124 00:24:11.543 Removing: /var/run/dpdk/spdk_pid73275 00:24:11.543 Removing: /var/run/dpdk/spdk_pid74778 00:24:11.543 Removing: /var/run/dpdk/spdk_pid75260 00:24:11.543 Removing: /var/run/dpdk/spdk_pid75411 00:24:11.543 Removing: /var/run/dpdk/spdk_pid75549 00:24:11.543 Removing: /var/run/dpdk/spdk_pid75973 00:24:11.543 Removing: /var/run/dpdk/spdk_pid76708 00:24:11.543 Removing: /var/run/dpdk/spdk_pid77094 00:24:11.543 Removing: /var/run/dpdk/spdk_pid77778 00:24:11.543 Removing: /var/run/dpdk/spdk_pid78220 00:24:11.543 Removing: /var/run/dpdk/spdk_pid78979 00:24:11.543 Removing: /var/run/dpdk/spdk_pid79388 00:24:11.543 Removing: /var/run/dpdk/spdk_pid81355 00:24:11.543 Removing: /var/run/dpdk/spdk_pid81800 00:24:11.543 Removing: /var/run/dpdk/spdk_pid82246 00:24:11.543 Removing: /var/run/dpdk/spdk_pid84357 00:24:11.543 Removing: /var/run/dpdk/spdk_pid84842 00:24:11.543 Removing: /var/run/dpdk/spdk_pid85369 00:24:11.543 Removing: /var/run/dpdk/spdk_pid86427 00:24:11.543 Removing: /var/run/dpdk/spdk_pid86761 00:24:11.543 Removing: /var/run/dpdk/spdk_pid87702 00:24:11.543 Removing: /var/run/dpdk/spdk_pid88027 00:24:11.543 Removing: /var/run/dpdk/spdk_pid88965 00:24:11.543 Removing: /var/run/dpdk/spdk_pid89290 00:24:11.543 Removing: /var/run/dpdk/spdk_pid89967 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90253 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90325 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90370 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90619 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90792 00:24:11.543 Removing: /var/run/dpdk/spdk_pid90890 
00:24:11.543 Removing: /var/run/dpdk/spdk_pid90989 00:24:11.543 Removing: /var/run/dpdk/spdk_pid91042 00:24:11.543 Removing: /var/run/dpdk/spdk_pid91073 00:24:11.543 Clean 00:24:11.543 14:23:42 -- common/autotest_common.sh@1453 -- # return 0 00:24:11.543 14:23:42 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:11.543 14:23:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.543 14:23:42 -- common/autotest_common.sh@10 -- # set +x 00:24:11.543 14:23:42 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:11.543 14:23:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.543 14:23:42 -- common/autotest_common.sh@10 -- # set +x 00:24:11.803 14:23:42 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:11.803 14:23:42 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:11.803 14:23:42 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:11.803 14:23:42 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:11.803 14:23:42 -- spdk/autotest.sh@398 -- # hostname 00:24:11.803 14:23:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:11.803 geninfo: WARNING: invalid characters removed from testname! 
00:24:33.784 14:24:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:37.077 14:24:07 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:38.984 14:24:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:40.890 14:24:11 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:43.457 14:24:13 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:45.364 14:24:16 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:47.268 14:24:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:47.268 14:24:18 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:47.268 14:24:18 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:47.268 14:24:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:47.268 14:24:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:47.268 14:24:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:47.268 + [[ -n 5424 ]] 00:24:47.268 + sudo kill 5424 00:24:47.278 [Pipeline] } 00:24:47.294 [Pipeline] // timeout 00:24:47.299 [Pipeline] } 00:24:47.314 [Pipeline] // stage 00:24:47.320 [Pipeline] } 00:24:47.334 [Pipeline] // catchError 00:24:47.344 [Pipeline] stage 00:24:47.346 [Pipeline] { (Stop VM) 00:24:47.358 [Pipeline] sh 00:24:47.639 + vagrant halt 00:24:50.173 ==> default: Halting domain... 00:24:58.309 [Pipeline] sh 00:24:58.596 + vagrant destroy -f 00:25:01.131 ==> default: Removing domain... 
00:25:01.143 [Pipeline] sh 00:25:01.434 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:25:01.496 [Pipeline] } 00:25:01.512 [Pipeline] // stage 00:25:01.517 [Pipeline] } 00:25:01.532 [Pipeline] // dir 00:25:01.537 [Pipeline] } 00:25:01.551 [Pipeline] // wrap 00:25:01.557 [Pipeline] } 00:25:01.570 [Pipeline] // catchError 00:25:01.580 [Pipeline] stage 00:25:01.582 [Pipeline] { (Epilogue) 00:25:01.596 [Pipeline] sh 00:25:01.881 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:07.169 [Pipeline] catchError 00:25:07.171 [Pipeline] { 00:25:07.184 [Pipeline] sh 00:25:07.468 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:07.468 Artifacts sizes are good 00:25:07.477 [Pipeline] } 00:25:07.491 [Pipeline] // catchError 00:25:07.502 [Pipeline] archiveArtifacts 00:25:07.510 Archiving artifacts 00:25:07.613 [Pipeline] cleanWs 00:25:07.627 [WS-CLEANUP] Deleting project workspace... 00:25:07.627 [WS-CLEANUP] Deferred wipeout is used... 00:25:07.633 [WS-CLEANUP] done 00:25:07.635 [Pipeline] } 00:25:07.651 [Pipeline] // stage 00:25:07.657 [Pipeline] } 00:25:07.672 [Pipeline] // node 00:25:07.678 [Pipeline] End of Pipeline 00:25:07.712 Finished: SUCCESS